\n",
+ "Click here for the solution\n",
+ " \n",
+ "Create a new case i.day5.a with the command:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/my_cesm_code/cime/scripts\n",
+ "./create_newcase --case ~/cases/i.day5.a --compset I2000Clm50Sp --res f09_g17_gl4 --run-unsupported\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd ~/cases/i.day5.a \n",
+ "./case.setup\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Change the CLM namelist by adding the following lines to user_nl_clm (a negative `hist_nhtfrq` gives the output frequency in hours, so -24 is daily output; `hist_mfilt` is the number of time samples per history file):\n",
+ "``` \n",
+ "hist_nhtfrq = -24\n",
+ "hist_mfilt = 6\n",
+ "```\n",
+ "\n",
+ " \n",
+ "Check the namelist by running:\n",
+ "``` \n",
+ "./preview_namelists\n",
+ "```\n",
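+ "\n",
+ "To confirm the settings made it into the generated namelist, you can, for example, search CaseDocs:\n",
+ "```\n",
+ "grep hist_ CaseDocs/lnd_in\n",
+ "```\n",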
+ "\n",
+ "\n",
+ "If needed, change job queue, account number, or wallclock time. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=tutorial,PROJECT=UESM0013,JOB_WALLCLOCK_TIME=0:15:00 --force\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "\n",
+ "When the run is completed, look in the archive directory for \n",
+ "i.day5.a. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/i.day5.a/lnd/hist\n",
+ "\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ "\n",
+ "(2) Look at the output using ncview, for example (the exact file names depend on your run dates):\n",
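+ "```\n",
+ "module load ncview\n",
+ "ncview i.day5.a.clm2.h0.*.nc\n",
+ "```\n"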
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "17cf7e19-1211-45f2-97cd-fe2badc69dac",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "CMIP6 2019.10",
+ "language": "python",
+ "name": "cmip6-201910"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/clm_ctsm/clm_exercise_2.ipynb b/_sources/notebooks/challenge/clm_ctsm/clm_exercise_2.ipynb
new file mode 100644
index 000000000..5deca8949
--- /dev/null
+++ b/_sources/notebooks/challenge/clm_ctsm/clm_exercise_2.ipynb
@@ -0,0 +1,188 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 2: Use the BGC model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0037b73f-f174-48e7-8e4f-0744d7d23fe0",
+ "metadata": {},
+ "source": [
+ "We can use a different I compset: IHistClm50BgcCrop. This experiment is a 20th-century transient run using GSWP3v1 atmospheric forcing and the biogeochemistry model, including crops."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bdd131c8-d1ec-4568-81dd-701f8bdbe6cb",
+ "metadata": {},
+ "source": [
+ "![icase](../../../images/challenge/ihist.png)\n",
+ "\n",
+ "* Figure: IHIST compset definition.*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Run an experimental case with prognostic BGC\n",
+ " \n",
+ "Create a case called **i.day5.b** using the compset `IHistClm50BgcCrop` at `f09_g17_gl4` resolution. \n",
+ " \n",
+ "Set the run length to **5 days**. \n",
+ "\n",
+ "Build and run the model.\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ "Create a new case i.day5.b:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/my_cesm_code/cime/scripts\n",
+ "./create_newcase --case ~/cases/i.day5.b --compset IHistClm50BgcCrop --res f09_g17_gl4 --run-unsupported\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd ~/cases/i.day5.b\n",
+ "./case.setup\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Note differences between this case and the control case:\n",
+ "``` \n",
+ "diff env_run.xml ../i.day5.a/env_run.xml\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Change the clm namelist using user_nl_clm by adding the following lines:\n",
+ "``` \n",
+ "hist_nhtfrq = -24\n",
+ "hist_mfilt = 6\n",
+ "```\n",
+ "\n",
+ " \n",
+ "Check the namelist by running:\n",
+ "``` \n",
+ "./preview_namelists\n",
+ "```\n",
+ "\n",
+ "\n",
+ "If needed, change job queue, account number, or wallclock time. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=tutorial,PROJECT=UESM0013,JOB_WALLCLOCK_TIME=0:15:00 --force\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Build case:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ " \n",
+ "Compare the namelists from the two experiments:\n",
+ "```\n",
+ "diff CaseDocs/lnd_in ../i.day5.a/CaseDocs/lnd_in\n",
+ "```\n",
+ "\n",
+ " \n",
+ "Submit case:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "\n",
+ "When the run is completed, look in the archive directory for \n",
+ "i.day5.b. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/i.day5.b/lnd/hist\n",
+ "\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ "\n",
+ " \n",
+ "(2) Compare to control run:\n",
+ "```\n",
+ "ncdiff -v TLAI i.day5.b.clm2.XXX.nc /glade/derecho/scratch/$USER/archive/i.day5.a/lnd/hist/i.day5.a.clm2.XXX.nc i_diff.nc\n",
+ "\n",
+ "ncview i_diff.nc\n",
+ "```\n",
+ "\n",
+ "\n",
+ " \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d69c456a-fdc6-4625-bbcc-ed32ab6ae8e8",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Test your understanding"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b3f28ecf-02b8-4cc0-bdc4-c36c8fc9e7aa",
+ "metadata": {},
+ "source": [
+ "- What changes do you see from the control case with the prognostic BGC?\n",
+ "- ... OTHERS?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5b5a8cee-0ca9-4076-a731-e6ec200b70d4",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "CMIP6 2019.10",
+ "language": "python",
+ "name": "cmip6-201910"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/clm_ctsm/clm_exercise_3.ipynb b/_sources/notebooks/challenge/clm_ctsm/clm_exercise_3.ipynb
new file mode 100644
index 000000000..e4654ce82
--- /dev/null
+++ b/_sources/notebooks/challenge/clm_ctsm/clm_exercise_3.ipynb
@@ -0,0 +1,197 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 3: Modify input data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0037b73f-f174-48e7-8e4f-0744d7d23fe0",
+ "metadata": {},
+ "source": [
+ "We can modify the input to CLM by changing one of the plant functional type properties. We will then compare these results with the control experiment.\n",
+ "\n",
+ "Note that you will need to change a netCDF file for this exercise. Because netCDF files are in a binary format, you will need a script or interpreter to read the file and write it out again (e.g., Ferret, IDL, NCL, NCO, Perl, Python, MATLAB, Yorick). In the solution below we show how to do this using NCO.\n",
+ "\n",
+ "NOTE: For tasks other than setting up, building, and submitting cases, you should work on the data visualization cluster (Casper) rather than on the Derecho login nodes.\n",
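+ "\n",
+ "If you prefer Python to NCO, a minimal sketch of the same kind of edit (assuming the netCDF4 package is available, and using the writable copy of the parameter file made in the solution below) is:\n",
+ "```\n",
+ "import netCDF4\n",
+ "\n",
+ "# open the writable copy of the parameter file and change one value\n",
+ "ds = netCDF4.Dataset('clm5_params.c171117.new.nc', 'r+')\n",
+ "ds['rholvis'][4] = 0.4  # visible leaf reflectance for one pft\n",
+ "ds.close()\n",
+ "```\n"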
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Run an experimental case\n",
+ " \n",
+ "Create a case called **i.day5.a_pft** using the compset `I2000Clm50Sp` at `f09_g17_gl4` resolution. \n",
+ "\n",
+ "Look at the variable “rholvis” in the parameter file below using ncview or `ncdump -v rholvis`. This is the visible leaf reflectance for every pft. Modify the rholvis value for one pft (the solution below uses 0.4 for pft index 4) in a copy of:\n",
+ "`/glade/campaign/cesm/cesmdata/cseg/inputdata/lnd/clm2/paramdata/clm5_params.c171117.nc`\n",
+ " \n",
+ "Set the run length to **5 days**. \n",
+ "\n",
+ "Build and run the model.\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ "Create the new case i.day5.a_pft as a clone of the control experiment i.day5.a:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/my_cesm_code/cime/scripts\n",
+ "./create_clone --case ~/cases/i.day5.a_pft --clone ~/cases/i.day5.a\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Modify the rholvis parameter in the physiology file:\n",
+ "``` \n",
+ "cd /glade/derecho/scratch/$USER\n",
+ "cp /glade/campaign/cesm/cesmdata/cseg/inputdata/lnd/clm2/paramdata/clm5_params.c171117.nc .\n",
+ "chmod u+w clm5_params.c171117.nc\n",
+ "cp clm5_params.c171117.nc clm5_params.c171117.new.nc\n",
+ "ncap2 -A -v -s 'rholvis(4)=0.4' clm5_params.c171117.nc clm5_params.c171117.new.nc\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Check the new rholvis parameter to be sure the modification worked:\n",
+ "``` \n",
+ "ncdump -v rholvis clm5_params.c171117.new.nc\n",
+ "# and compare it to the original file\n",
+ "ncdiff clm5_params.c171117.nc clm5_params.c171117.new.nc ncdiff.nc\n",
+ "ncdump -v rholvis ncdiff.nc\n",
+ "```\n",
+ "\n",
+ " \n",
+ "Case setup:\n",
+ "``` \n",
+ "cd ~/cases/i.day5.a_pft\n",
+ "./case.setup\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Change the clm namelist using user_nl_clm to point at the modified file. Add the following line:\n",
+ "``` \n",
+ "paramfile = '/glade/derecho/scratch/$USER/clm5_params.c171117.new.nc' \n",
+ "```\n",
+ "\n",
+ " \n",
+ "Check the namelist by running:\n",
+ "``` \n",
+ "./preview_namelists\n",
+ "```\n",
+ "\n",
+ "\n",
+ "If needed, change job queue, account number, or wallclock time. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=tutorial,PROJECT=UESM0013,JOB_WALLCLOCK_TIME=0:15:00 --force\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Build case:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ " \n",
+ "Compare the namelists from the two experiments:\n",
+ "```\n",
+ "diff CaseDocs/lnd_in ../i.day5.a/CaseDocs/lnd_in\n",
+ "```\n",
+ "\n",
+ " \n",
+ "Submit case:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "\n",
+ "When the run is completed, look in the archive directory for \n",
+ "i.day5.a_pft. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/i.day5.a_pft/lnd/hist\n",
+ "\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ "\n",
+ "(2) Compare to control run:\n",
+ "```\n",
+ "ncdiff i.day5.a_pft.clm2.XXX.nc /glade/derecho/scratch/$USER/archive/i.day5.a/lnd/hist/i.day5.a.clm2.XXX.nc i_diff.nc\n",
+ "\n",
+ "ncview i_diff.nc\n",
+ "```\n",
+ "\n",
+ "\n",
+ " \n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b3ffd3cc-676e-4e7c-9ff4-cd65d4745397",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Test your understanding"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "03ac3664-5360-45d7-a3ad-0797a839a1d3",
+ "metadata": {},
+ "source": [
+ "- How did rholvis change (increase/decrease)? Given this, what do you expect the model response to be?\n",
+ "- What changes do you see from the control case with the modified rholvis parameter?\n",
+ "- ... OTHERS? "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "27f332b2-5799-43a8-9060-50315ebdf6dc",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "CMIP6 2019.10",
+ "language": "python",
+ "name": "cmip6-201910"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/mom/mom_exercise_5.ipynb b/_sources/notebooks/challenge/mom/mom_exercise_5.ipynb
new file mode 100644
index 000000000..552ceccc4
--- /dev/null
+++ b/_sources/notebooks/challenge/mom/mom_exercise_5.ipynb
@@ -0,0 +1,222 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 5: Control case using MOM6"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0bdbbd2b-8255-44f3-8c8c-da725d26f845",
+ "metadata": {},
+ "source": [
+ "\n",
+ "We will use a different CESM tag (cesm2_3_beta17) for this exercise.\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "92d9d3d2-3476-47fb-9b46-f26f5f79fdb2",
+ "metadata": {},
+ "source": [
+ "## Download CESM (tag cesm2_3_beta17)\n",
+ "\n",
+ "If needed, revisit the [Download CESM](https://ncar.github.io/CESM-Tutorial/notebooks/basics/code/git_download_cesm.html) section before executing the following steps.\n",
+ "\n",
+ "### Git Clone\n",
+ "\n",
+ "Change the current directory to the code workspace directory:\n",
+ "\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code\n",
+ "```\n",
+ "\n",
+ " \n",
+ "Download the CESM code to your code workspace directory as `cesm2_3_beta17`:\n",
+ "```\n",
+ "git clone https://github.com/ESCOMP/CESM.git cesm2_3_beta17\n",
+ "cd cesm2_3_beta17\n",
+ "git checkout cesm2_3_beta17\n",
+ "``` \n",
+ "\n",
+ "\n",
+ "### Download the Component Models with checkout_externals\n",
+ "\n",
+ "MOM6 is still an optional component in this version of CESM. Therefore, we need to run the `checkout_externals` command with the `-o` (optional) argument to download the required and optional component models:\n",
+ "\n",
+ "\n",
+ " \n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/cesm2_3_beta17\n",
+ "./manage_externals/checkout_externals -o\n",
+ "```\n",
+ "\n",
+ "\n",
+ "*Note: If you get a message about accepting a certificate permanently or temporarily, accept the certificate permanently. If you do not get this message, do not worry, you are still on track!*\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d708d564-464b-49fb-96c5-a98b91e9b91b",
+ "metadata": {},
+ "source": [
+ "**Congratulations, you have now downloaded the cesm2_3_beta17 tag to your workspace!**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a03b1449-5e68-43db-a36b-6f3baab8e757",
+ "metadata": {},
+ "source": [
+ "## Exercise"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6457c1d2-0530-435d-ae27-d0f1eeabe583",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Run a JRA-forced MOM6 control case\n",
+ " \n",
+ "Create a case called **gmom_jra.run_length** using the compset ``GMOM_JRA`` at ``TL319_t232`` resolution. \n",
+ " \n",
+ "Set the run length to **5 years**. \n",
+ "\n",
+ "Build and run the model. Since this is a control case, we want to build it \"out of the box\" without any modifications. \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e2e33a95-e93c-4aca-86d7-1a830cc0562c",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints\n",
+ "\n",
+ "**How do I check the compsets with MOM6?**\n",
+ "\n",
+ "Go to [CESM2.2.0 Component Sets Definitions](https://docs.cesm.ucar.edu/models/cesm2/config/compsets.html) and look for \"MOM\" using the \"Search\" box.\n",
+ "\n",
+ "\n",
+ "**How do I compile?**\n",
+ "\n",
+ "You can compile with the command:\n",
+ " \n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**How do I check my solution?**\n",
+ "\n",
+ "When your run is completed, go to the archive directory. \n",
+ "\n",
+ "(1) Check that your archive directory contains files *mom6.h.z*, *mom6.h.sfc*, etc.\n",
+ "\n",
+ "\n",
+ "(2) Compare the contents of the ``h.z`` and ``h.sfc`` files using ``ncdump``.\n",
+ "\n",
+ "```\n",
+ "ncdump -h gmom_jra.run_length.mom6.h.z.0005-12.nc\n",
+ "ncdump -h gmom_jra.run_length.mom6.h.sfc.0005-12.nc\n",
+ "```\n",
+ "\n",
+ "(3) Look at the sizes of the files, for example:\n",
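+ "\n",
+ "```\n",
+ "ls -lh gmom_jra.run_length.mom6.h.*.nc\n",
+ "```\n",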
+ "\n",
+ " \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ "Create a new case gmom_jra.run_length with the command:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/cesm2_3_beta17/cime/scripts/\n",
+ "./create_newcase --case /glade/u/home/$USER/cases/gmom_jra.run_length --compset GMOM_JRA --res TL319_t232 \n",
+ "```\n",
+ "\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd ~/cases/gmom_jra.run_length \n",
+ "./case.setup\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Change the run length:\n",
+ "``` \n",
+ "./xmlchange STOP_N=5,STOP_OPTION=nyears\n",
+ "```\n",
+ "\n",
+ "\n",
+ "If needed, change the job queue and account number. For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=tutorial,PROJECT=UESM0013 --force\n",
+ "```\n",
+ "\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "\n",
+ "When the run is completed, look in the archive directory for \n",
+ "gmom_jra.run_length. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/gmom_jra.run_length/ocn/hist\n",
+ "\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "NPL 2023b",
+ "language": "python",
+ "name": "npl-2023b"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/paleo/exercise_1.ipynb b/_sources/notebooks/challenge/paleo/exercise_1.ipynb
new file mode 100644
index 000000000..9696d2ae9
--- /dev/null
+++ b/_sources/notebooks/challenge/paleo/exercise_1.ipynb
@@ -0,0 +1,223 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 1: Preindustrial control case\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "Exercise: Run a preindustrial control simulation\n",
+ " \n",
+ "Create, configure, build and run a fully coupled preindustrial case called ``b.e21.B1850.f19_g17.piControl.001`` following [CESM naming conventions](https://www.cesm.ucar.edu/models/cesm2/naming-conventions). \n",
+ "\n",
+ "Run for 1 year. \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "65b2cbda-2d54-48ee-898b-4c391f16ca79",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints\n",
+ "\n",
+ "\n",
+ "**What is the compset for fully coupled preindustrial?**\n",
+ "\n",
+ "- ``B1850`` \n",
+ "\n",
+ "**What is the resolution for B1850?**\n",
+ "\n",
+ "- Use resolution ``f19_g17`` for fast throughput \n",
+ "\n",
+ "**Which XML variable should you change to tell the model to run for one year?**\n",
+ "\n",
+ "- Use ``STOP_OPTION`` and ``STOP_N`` \n",
+ "\n",
+ "**How to check if each XML variable is modified correctly?**\n",
+ "\n",
+ "- Use ``xmlquery -p`` \n",
+ "\n",
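+ "For example, to list every XML variable whose name contains STOP:\n",
+ "```\n",
+ "./xmlquery -p STOP\n",
+ "```\n",
+ "\n",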
+ " \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7dd602b7-372d-4f36-b6d1-df8e22ba1646",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ " \n",
+ "**# Set environment variables** \n",
+ "\n",
+ "Set environment variables with the commands:\n",
+ " \n",
+ "**For tcsh users** \n",
+ " \n",
+ "```\n",
+ "set CASENAME=b.e21.B1850.f19_g17.piControl.001\n",
+ "set CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "set RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "set COMPSET=B1850\n",
+ "set RESOLUTION=f19_g17\n",
+ "set PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "Note: You should use the project number given for this tutorial.\n",
+ "\n",
+ "**For bash users** \n",
+ " \n",
+ "```\n",
+ "export CASENAME=b.e21.B1850.f19_g17.piControl.001\n",
+ "export CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "export RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "export COMPSET=B1850\n",
+ "export RESOLUTION=f19_g17\n",
+ "export PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "Note: You should use the project number given for this tutorial.\n",
+ "\n",
+ "**# Make a case directory**\n",
+ "\n",
+ "If needed, create a `cases` directory in your home directory:\n",
+ " \n",
+ "```\n",
+ "mkdir /glade/u/home/$USER/cases/\n",
+ "```\n",
+ " \n",
+ "\n",
+ "**# Create a new case**\n",
+ "\n",
+ "Create a new case with the command ``create_newcase``:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_newcase --case $CASEDIR --res $RESOLUTION --compset $COMPSET --project $PROJECT\n",
+ "```\n",
+ "\n",
+ "**# Change the job queue**\n",
+ "\n",
+ "If needed, change the ``job queue``. For instance, to run in the ``main`` queue:\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./xmlchange JOB_QUEUE=main\n",
+ "```\n",
+ "This step can be redone at any time in the process. \n",
+ "\n",
+ "**# Setup**\n",
+ "\n",
+ "Invoke ``case.setup`` with the command:\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./case.setup \n",
+ "``` \n",
+ "\n",
+ "You build the namelists with the command:\n",
+ "```\n",
+ "./preview_namelists\n",
+ "```\n",
+ "This step is optional as the script ``preview_namelists`` is automatically called by ``case.build`` and ``case.submit``. But it is nice to check that your changes made their way into:\n",
+ "```\n",
+ "$CASEDIR/CaseDocs/atm_in\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**# Set run length**\n",
+ "\n",
+ "```\n",
+ "./xmlchange STOP_N=1,STOP_OPTION=nyears\n",
+ "```\n",
+ "\n",
+ "**# Build and submit**\n",
+ "\n",
+ "```\n",
+ "qcmd -A $PROJECT -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "------------\n",
+ "\n",
+ "**# Check on your run**\n",
+ "\n",
+ "\n",
+ "After submitting the job, use ``qstat -u $USER`` to check the status of your job. \n",
+ "It may take ~16 minutes to finish the one-year simulation. \n",
+ "\n",
+ "**# Check your solution**\n",
+ "\n",
+ "When the run is completed, look at the history files in the archive directory. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/$CASENAME/atm/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ "Since your run is one year long, there should be 12 monthly files (``h0``) for each model component. \n",
+ "\n",
+ "\n",
+ "Success! Now let's look back into the past... \n",
+ "\n",
+ " \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "472131c7-88f9-4863-a2bc-d7364333542d",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "815be0bc-515a-474b-a3dd-b7ba02831b9a",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/paleo/exercise_2.ipynb b/_sources/notebooks/challenge/paleo/exercise_2.ipynb
new file mode 100644
index 000000000..49a5caae4
--- /dev/null
+++ b/_sources/notebooks/challenge/paleo/exercise_2.ipynb
@@ -0,0 +1,330 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 2: mid-Holocene case \n",
+ "\n",
+ "The Holocene Epoch started ~11,700 years before present (11.7 ka BP) and is the current period of geologic time. \n",
+ "\n",
+ "Although humans were already well established before the Holocene, the most recent part of this epoch is sometimes referred to as the Anthropocene because its primary characteristic is global change caused by human activity.\n",
+ "\n",
+ "The Holocene is an interglacial period, marked by receding ice sheets and rising greenhouse gases that were accompanied by changes in the Earth's orbit around the Sun. \n",
+ "\n",
+ "Today, we will use CESM to investigate the influence of Holocene orbital forcing on climate. \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "Exercise: Run a mid-Holocene simulation with orbital forcing\n",
+ " \n",
+ "Create, configure, build and run a fully coupled mid-Holocene (~6 ka BP) case called ``b.e21.B1850.f19_g17.midHolocene.001`` following [CESM naming conventions](https://www.cesm.ucar.edu/models/cesm2/naming-conventions). \n",
+ "\n",
+ "Run for 1 year. \n",
+ "\n",
+ "Compare and visualize differences between preindustrial and mid-Holocene runs using NCO and Ncview. \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "65b2cbda-2d54-48ee-898b-4c391f16ca79",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints\n",
+ "\n",
+ "\n",
+ "**What is the compset for fully coupled mid-Holocene run?**\n",
+ "\n",
+ "- Use ``B1850`` and modify the preindustrial orbital configuration (no mid-Holocene compset is available) \n",
+ "\n",
+ "**What is the resolution for B1850?**\n",
+ "\n",
+ "- Use resolution ``f19_g17`` for fast throughput \n",
+ "\n",
+ "**What was the orbital configuration 6 ka BP?**\n",
+ "\n",
+ "- According to Table 1 of [Otto-Bliesner et al., (2017)](https://doi.org/10.5194/gmd-10-3979-2017), Eccentricity = 0.018682, Obliquity (degrees) = 24.105, Perihelion = 0.87 (for simplicity, we do not consider the other forcings here, e.g., CO2) \n",
+ "\n",
+ "**How to modify orbital configuration in CESM world?**\n",
+ "\n",
+ "- Edit ``user_nl_cpl`` \n",
+ "- ``orb_mode = 'fixed_parameters'`` \n",
+ "- ``orb_eccen = 0.018682`` \n",
+ "- ``orb_obliq = 24.105`` \n",
+ "- ``orb_mvelp = 0.87`` \n",
+ "\n",
+ " \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7dd602b7-372d-4f36-b6d1-df8e22ba1646",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ " \n",
+ "**# Set environment variables** \n",
+ "\n",
+ "Set environment variables with the commands:\n",
+ " \n",
+ "**For tcsh users** \n",
+ " \n",
+ "```\n",
+ "set CASENAME=b.e21.B1850.f19_g17.midHolocene.001\n",
+ "set CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "set RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "set COMPSET=B1850\n",
+ "set RESOLUTION=f19_g17\n",
+ "set PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "You should use the project number given for this tutorial.\n",
+ "\n",
+ "**For bash users** \n",
+ " \n",
+ "```\n",
+ "export CASENAME=b.e21.B1850.f19_g17.midHolocene.001\n",
+ "export CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "export RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "export COMPSET=B1850\n",
+ "export RESOLUTION=f19_g17\n",
+ "export PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "You should use the project number given for this tutorial.\n",
+ "\n",
+ "**# Make a case directory**\n",
+ "\n",
+ "If needed, create a `cases` directory in your home directory:\n",
+ " \n",
+ "```\n",
+ "mkdir /glade/u/home/$USER/cases/\n",
+ "```\n",
+ " \n",
+ "\n",
+ "**# Create a new case**\n",
+ "\n",
+ "Create a new case with the command ``create_newcase``:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_newcase --case $CASEDIR --res $RESOLUTION --compset $COMPSET --project $PROJECT\n",
+ "```\n",
+ "\n",
+ "**# Change the job queue**\n",
+ "\n",
+ "If needed, change the ``job queue``. For instance, to run in the ``main`` queue:\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./xmlchange JOB_QUEUE=main\n",
+ "```\n",
+ "This step can be redone at any time in the process. \n",
+ "\n",
+ "**# Setup**\n",
+ "\n",
+ "Invoke ``case.setup`` with the command:\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./case.setup \n",
+ "``` \n",
+ "\n",
+ "You build the namelists with the command:\n",
+ "```\n",
+ "./preview_namelists\n",
+ "```\n",
+ "This step is optional as the script ``preview_namelists`` is automatically called by ``case.build`` and ``case.submit``. But it is nice to check that your changes made their way into:\n",
+ "```\n",
+ "$CASEDIR/CaseDocs/atm_in\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**# Set run length**\n",
+ "\n",
+ "```\n",
+ "./xmlchange STOP_N=1,STOP_OPTION=nyears\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**# Add the following to user_nl_cpl**\n",
+ "\n",
+ "```\n",
+ "orb_mode = 'fixed_parameters'\n",
+ "orb_eccen = 0.018682\n",
+ "orb_obliq = 24.105\n",
+ "orb_mvelp = 0.87\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**# Build and submit**\n",
+ "\n",
+ "```\n",
+ "qcmd -A $PROJECT -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "------------\n",
+ "\n",
+ "\n",
+ "**# Validate your simulation setup**\n",
+ "\n",
+ "\n",
+ "(1) You can check the log file, ``cpl.log.xxx``, in the Run Directory (while the model is still running) or in your Storage Directory (once the simulation and archiving have finished). \n",
+ "\n",
+ "Note: The ``less`` command in Linux is a terminal pager program used to view (but not change) the contents of a text file one screen at a time. It is particularly useful for large files, as it does not need to read the entire file before starting, hence it loads large files faster than editors like vi or emacs. \n",
+ "\n",
+ "To skip to the bottom of the file, press ``Shift + g``. \n",
+ "To stop viewing the contents of the file with ``less``, press ``q``. \n",
+ "```\n",
+ "less /glade/derecho/scratch/$USER/$CASENAME/run/cpl.log.* \n",
+ "less /glade/derecho/scratch/$USER/archive/$CASENAME/logs/cpl.log.*.gz \n",
+ "```\n",
+ "\n",
+ "\n",
+ "Alternatively, use the real-time monitoring mode with ``less`` that you can activate with the ``+F`` (forward) option. Now, new lines will be continuously displayed as they are added to the file during the run. \n",
+ "To exit forward mode and revert to the standard interactive mode of less, press ``Ctrl + C``. \n",
+ "```\n",
+ "less +F /glade/derecho/scratch/$USER/$CASENAME/run/cpl.log.* \n",
+ "```\n",
+ "\n",
+ "\n",
+ "(2) Type ``/orb_params`` to search; you should see the following: \n",
+ "```\n",
+ " (shr_orb_params) Calculate characteristics of the orbit:\n",
+ " (shr_orb_params) Calculate orbit for year: -4050\n",
+ " (shr_orb_params) ------ Computed Orbital Parameters ------\n",
+ " (shr_orb_params) Eccentricity = 1.868182E-02\n",
+ " (shr_orb_params) Obliquity (deg) = 2.410538E+01\n",
+ " (shr_orb_params) Obliquity (rad) = 4.207183E-01\n",
+ " (shr_orb_params) Long of perh(deg) = 8.696128E-01\n",
+ " (shr_orb_params) Long of perh(rad) = 3.156770E+00\n",
+ " (shr_orb_params) Long at v.e.(rad) = -5.751115E-04\n",
+ "```\n",
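+ "\n",
+ "Alternatively, you can search the archived (gzipped) log directly with zgrep, for example:\n",
+ "```\n",
+ "zgrep orb_params /glade/derecho/scratch/$USER/archive/$CASENAME/logs/cpl.log.*.gz\n",
+ "```\n",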
+ "\n",
+ "**# Check your solution**\n",
+ "\n",
+ "When the run is completed, look at the history files in the archive directory. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/$CASENAME/atm/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ "Since your run is one year long, there should be 12 monthly files (``h0``) for each model component. \n",
+ "\n",
+ "\n",
+ " \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "65b2cbda-2d54-48ee-898b-4c391f16ca79",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ " Click here to visualize results\n",
+ "\n",
+ "\n",
+ "**# Use Ncview to visualize solar insolation**\n",
+ "\n",
+ "Earth's orbital configuration influences incoming solar insolation.\n",
+ "Take a look at the ``SOLIN`` CAM variable for August in the pre-industrial and mid-Holocene runs.\n",
+ "``` \n",
+ "module load ncview\n",
+ "cd /glade/derecho/scratch/$USER/archive\n",
+ "ncview b.e21.B1850.f19_g17.piControl.001/atm/hist/b.e21.B1850.f19_g17.piControl.001.cam.h0.0001-08.nc b.e21.B1850.f19_g17.midHolocene.001/atm/hist/b.e21.B1850.f19_g17.midHolocene.001.cam.h0.0001-08.nc\n",
+ "```\n",
+ "\n",
+ "Using the right arrow button in the Ncview window, you can toggle between pre-industrial and mid-Holocene August ``SOLIN`` and other variables. \n",
+ "\n",
+ "\n",
+ "A few side notes on comparing pre-industrial and mid-Holocene runs: \n",
+ "- Changes in Earth's orbit alter the length of months or seasons over time; this is referred to as the 'paleo calendar effect' \n",
+ "- This means that the modern fixed-length definition of months does not apply to periods when the Earth traversed different portions of its orbit \n",
+ "- Tools exist to adjust monthly CESM output to account for the 'paleo calendar effect' \n",
+ "- See [PaleoCalAdjust tool](https://github.com/CESM-Development/paleoToolkit/tree/master/PaleoCalAdjust) from [Bartlein & Shafer et al. (2019)](https://doi.org/10.5194/gmd-12-3889-2019) for more information \n",
+ "- For simplicity, we assume in this exercise that the definition of months is the same for the pre-industrial and mid-Holocene \n",
+ "\n",
+ "Now, let's take a look at the differences between the two cases more clearly using NCO. \n",
+ "\n",
+ "``` \n",
+ "module load nco\n",
+ "cd /glade/derecho/scratch/$USER/archive\n",
+ "ncdiff b.e21.B1850.f19_g17.midHolocene.001/atm/hist/b.e21.B1850.f19_g17.midHolocene.001.cam.h0.0001-08.nc b.e21.B1850.f19_g17.piControl.001/atm/hist/b.e21.B1850.f19_g17.piControl.001.cam.h0.0001-08.nc diff_MH-PI_0001-08.nc \n",
+ "ncview diff_MH-PI_0001-08.nc \n",
+ "```\n",
+ "\n",
+ "Note: Running ncdiff this way will place ``diff_MH-PI_0001-08.nc`` in your archive directory. You may use ``mv`` to move ``diff_MH-PI_0001-08.nc`` to another directory. \n",
+ "\n",
+ "**# Questions for reflection:**\n",
+ "- Which orbital parameters are different at the middle Holocene (6 ka BP)? \n",
+ "- How do the orbital parameters impact the top-of-atmosphere shortwave radiation (solar insolation) during summertime in the Northern Hemisphere? \n",
+ "- Do the results look correct? You can compare your results with Figure 3b of [Otto-Bliesner et al., (2017)](https://doi.org/10.5194/gmd-10-3979-2017) \n",
+ "- What other aspects of climate are different between the mid-Holocene and pre-industrial runs?",
+ "\n",
+ " \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "472131c7-88f9-4863-a2bc-d7364333542d",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "815be0bc-515a-474b-a3dd-b7ba02831b9a",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/paleo/exercise_3.ipynb b/_sources/notebooks/challenge/paleo/exercise_3.ipynb
new file mode 100644
index 000000000..08f91b386
--- /dev/null
+++ b/_sources/notebooks/challenge/paleo/exercise_3.ipynb
@@ -0,0 +1,349 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 3: Water isotope tracers in CESM\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "Exercise: Run a preindustrial simulation with water isotope tracers\n",
+ "\n",
+ "Download (`git clone`) isotope-enabled CESM1.3 (iCESM1.3; [code available here](https://github.com/NCAR/iCESM1.3_iHESP_hires)), as the version of CESM used in this tutorial does not include water isotope capabilities. \n",
+ "\n",
+ "Create, configure, build and run a fully coupled preindustrial case called ``b.e13.B1850C5.f19_g16.piControl.001`` following [CESM naming conventions](https://www.cesm.ucar.edu/models/cesm2/naming-conventions), including water isotope tracers. \n",
+ "\n",
+ "Run for 1 year. \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "65b2cbda-2d54-48ee-898b-4c391f16ca79",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints\n",
+ "\n",
+ "\n",
+ "**What is the resolution for B1850C5?**\n",
+ "\n",
+ "- Use resolution ``f19_g16`` for fast throughput \n",
+ "\n",
+ "**Which XML variable should you change to tell the model to run for one year?**\n",
+ "\n",
+ "- Use ``STOP_OPTION`` and ``STOP_N`` \n",
+ "\n",
+ "**How to check if each XML variable is modified correctly?**\n",
+ "\n",
+ "- Use ``xmlquery -p`` \n",
+ "\n",
+ " \n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7dd602b7-372d-4f36-b6d1-df8e22ba1646",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution\n",
+ " \n",
+ "**# Download iCESM1.3 code** \n",
+ "\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code \n",
+ "git clone https://github.com/NCAR/iCESM1.3_iHESP_hires iCESM1.3_iHESP_hires \n",
+ "cd iCESM1.3_iHESP_hires \n",
+ "./manage_externals/checkout_externals \n",
+ "```\n",
+ "\n",
+ " \n",
+ "**# Set environment variables** \n",
+ "\n",
+ "Set environment variables with the commands:\n",
+ " \n",
+ "**For tcsh users** \n",
+ " \n",
+ "```\n",
+ "set CASENAME=b.e13.B1850C5.f19_g16.piControl.001\n",
+ "set CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "set RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "set COMPSET=B1850C5\n",
+ "set RESOLUTION=f19_g16\n",
+ "set PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "Note: You should use the project number given for this tutorial.\n",
+ "\n",
+ "**For bash users** \n",
+ " \n",
+ "```\n",
+ "export CASENAME=b.e13.B1850C5.f19_g16.piControl.001\n",
+ "export CASEDIR=/glade/u/home/$USER/cases/$CASENAME\n",
+ "export RUNDIR=/glade/derecho/scratch/$USER/$CASENAME/run\n",
+ "export COMPSET=B1850C5\n",
+ "export RESOLUTION=f19_g16\n",
+ "export PROJECT=UESM0013\n",
+ "```\n",
+ "\n",
+ "Note: You should use the project number given for this tutorial.\n",
+ "\n",
+ "**# Make a case directory**\n",
+ "\n",
+ "If needed, create a `cases` directory in your home directory:\n",
+ " \n",
+ "```\n",
+ "mkdir /glade/u/home/$USER/cases/\n",
+ "```\n",
+ " \n",
+ "\n",
+ "**# Create a new case**\n",
+ "\n",
+ "Create a new case with the command ``create_newcase``:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/iCESM1.3_iHESP_hires/cime/scripts/\n",
+ "./create_newcase --case $CASEDIR --res $RESOLUTION --compset $COMPSET --project $PROJECT --run-unsupported \n",
+ "```\n",
+ "\n",
+ "**# Change the job queue**\n",
+ "\n",
+ "If needed, change the ``job queue``. For instance, to run in the ``main`` queue:\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./xmlchange JOB_QUEUE=main\n",
+ "```\n",
+ "This step can be redone at any time in the process before running `case.submit`. \n",
+ "\n",
+ "**# Setup**\n",
+ "\n",
+ "Invoke ``case.setup`` with the command:\n",
+ "``` \n",
+ "cd $CASEDIR\n",
+ "./case.setup \n",
+ "``` \n",
+ "\n",
+ "You build the namelists with the command:\n",
+ "```\n",
+ "./preview_namelists\n",
+ "```\n",
+ "This step is optional as the script ``preview_namelists`` is automatically called by ``case.build`` and ``case.submit``. But it is nice to check that your changes made their way into:\n",
+ "```\n",
+ "$CASEDIR/CaseDocs/atm_in\n",
+ "```\n",
+ "\n",
+ "\n",
+ "**# Set run length**\n",
+ "\n",
+ "```\n",
+ "./xmlchange STOP_N=1,STOP_OPTION=nyears\n",
+ "```\n",
+ "\n",
+ "**# Build the run**\n",
+ "\n",
+ "```\n",
+ "qcmd -A $PROJECT -- ./case.build\n",
+ "```\n",
+ "\n",
+ "**# Which namelist variables enable water isotope tracers?**\n",
+ "\n",
+ "- Notice that the steps to set up this isotope-enabled preindustrial simulation are very similar to a preindustrial simulation without isotopes (e.g., Paleo Exercise 1) \n",
+ "- In iCESM1.3, it is assumed you will run with water isotope tracers, so each compset includes isotope settings by default \n",
+ "- Use ``./xmlquery`` to explore how ``FLDS_WISO``, ``CAM_CONFIG_OPTS``, and ``OCN_TRACER_MODULES`` differ between this iCESM1.3 case and that of Exercise 1 \n",
+ "- Also, take a look at namelist settings for each CESM component with variables that contain ``wiso`` in ``$CASEDIR/Buildconf`` \n",
+ "\n",
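+ "For example, from $CASEDIR:\n",
+ "```\n",
+ "./xmlquery FLDS_WISO,CAM_CONFIG_OPTS,OCN_TRACER_MODULES\n",
+ "```\n",
+ "\n",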
+ "\n",
+ "**# Submit the run**\n",
+ "\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ "------------\n",
+ "\n",
+ "**# Check on your run**\n",
+ "\n",
+ "\n",
+ "After submitting the job, use ``qstat -u $USER`` to check the status of your job. \n",
+ "\n",
+ "**# Check your solution**\n",
+ "\n",
+ "When the run is completed, look at the history files in the archive directory. \n",
+ " \n",
+ "(1) Check your archive directory on derecho (the path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/$CASENAME/atm/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ "Since your run is one year long, there should be 12 monthly files (``h0``) for each model component. \n",
+ "\n",
+ "\n",
+ "Success! Let's plot the results. \n",
+ "\n",
+ " \n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "06cb01fc",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "\n",
+ " Click here to visualize results\n",
+ "\n",
+ "\n",
+ "**--------------- Option 1 ---------------** \n",
+ "\n",
+ "**# Use NCO to calculate the oxygen isotopic composition of precipitation**\n",
+ "\n",
+ "The ratio of heavy ($^{18}\\text{O}$) to light ($^{16}\\text{O}$) isotopes is most commonly expressed relative to a standard in delta (δ) notation: \n",
+ "\n",
+ "$$ \\delta^{18}\\text{O} = \\frac{R_{\\text{sample}} - R_{\\text{std}}}{R_{\\text{std}}} \\times 1000‰ $$\n",
+ "\n",
+ "where \n",
+ "- $R_{\\text{sample}}$ = ratio of $^{18}\\text{O}$ to $^{16}\\text{O}$ in sample \n",
+ "- $R_{\\text{std}}$ = ratio of $^{18}\\text{O}$ to $^{16}\\text{O}$ in a standard \n",
+ "\n",
+ "Thus, the $\\delta^{18}\\text{O}$ of a sample which is identical to the standard would be 0‰, positive values indicate a greater proportion of $^{18}\\text{O}$ than the standard, and negative values indicate a lower proportion of $^{18}\\text{O}$ . \n",
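+ "\n",
+ "For example, a sample whose $^{18}\\text{O}/^{16}\\text{O}$ ratio is 2% lower than the standard has $\\delta^{18}\\text{O} = (0.98 - 1) \\times 1000 = -20$‰. \n",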
+ "\n",
+ "In isotope-enabled CESM, the relative abundances of $^{16}\\text{O}$ and $^{18}\\text{O}$ are already adjusted to their naturally occurring global abundances (99.757% and 0.205%, respectively), so we do not include $R_{\\text{std}}$ in the calculation of $\\delta^{18}\\text{O}$. Rather, isotope variables in CESM are expressed in delta (δ) notation as: \n",
+ "\n",
+ "\n",
+ "$$ \\delta^{18}O = (\\frac{\\text{PRECRC_H218Or} + \\text{PRECSC_H218Os} + \\text{PRECRL_H218OR} + \\text{PRECSL_H218OS}}{\\text{PRECRC_H216Or} + \\text{PRECSC_H216Os} + \\text{PRECRL_H216OR} + \\text{PRECSL_H216OS}} - 1) \\times 1000‰ $$\n",
+ "\n",
+ "\n",
+ "- Use ``ncdump /glade/derecho/scratch/$USER/archive/$CASENAME/atm/hist/$CASENAME.cam.h0.0001-01.nc | less`` to check the definition of each isotope variable above \n",
+ "- For example, search for ``PRECRC_H218Or`` by typing ``/PRECRC_H218Or`` followed by ``Enter`` \n",
+ "\n",
+ "To calculate the δ18O of precipitation from the simulation using NCO, \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/$CASENAME/atm/hist\n",
+ "ncap2 -s 'd18Op=((PRECRC_H218Or+PRECSC_H218Os+PRECRL_H218OR+PRECSL_H218OS)/(PRECRC_H216Or+PRECSC_H216Os+PRECRL_H216OR+PRECSL_H216OS) - 1)*1000.' -v $CASENAME.cam.h0.0001-12.nc d18Op.$CASENAME.cam.h0.0001-12.nc \n",
+ "```\n",
+ "\n",
+ "**# Use Ncview to visualize precipitation $\\delta^{18}\\text{O}$**\n",
+ "\n",
+ "Take a look at the ``d18Op`` variable we calculated for one month of the pre-industrial run.\n",
+ "``` \n",
+ "module load ncview\n",
+ "ncview d18Op.$CASENAME.cam.h0.0001-12.nc \n",
+ "```\n",
+ "\n",
+ "**--------------- Option 2 ---------------** \n",
+ "\n",
+ "**# Use Python to calculate and plot the oxygen isotopic composition of precipitation**\n",
+ "\n",
+ "The following Python code will produce a plot of precipitation δ18O for 1 month. \n",
+ "\n",
+ "``` \n",
+ "import os \n",
+ "import xarray as xr \n",
+ "import numpy as np \n",
+ "import matplotlib.pyplot as plt \n",
+ "import cartopy.crs as ccrs \n",
+ "from cartopy.util import add_cyclic_point \n",
+ "\n",
+ "def calculate_d18Op(ds): \n",
+ " # Compute precipitation δ18O with iCESM output \n",
+ " \n",
+ " # Parameters \n",
+ " # ds: xarray.Dataset contains necessary variables \n",
+ " \n",
+ " # Returns \n",
+ " # ds: xarray.Dataset with δ18O added \n",
+ " \n",
+ " # convective & large-scale rain and snow, respectively \n",
+ " p16O = ds.PRECRC_H216Or + ds.PRECSC_H216Os + ds.PRECRL_H216OR + ds.PRECSL_H216OS \n",
+ " p18O = ds.PRECRC_H218Or + ds.PRECSC_H218Os + ds.PRECRL_H218OR + ds.PRECSL_H218OS \n",
+ " \n",
+ " # avoid dividing by small number here \n",
+ " p18O = p18O.where(p16O > 1.E-18, 1.E-18) \n",
+ " p16O = p16O.where(p16O > 1.E-18, 1.E-18) \n",
+ " d18O = (p18O / p16O - 1.0) * 1000.0 \n",
+ " \n",
+ " ds['p16O'] = p16O \n",
+ " ds['p18O'] = p18O \n",
+ " ds['d18O'] = d18O \n",
+ " return ds \n",
+ "\n",
+ "# Read in monthly file \n",
+ "case = 'b.e13.B1850C5.f19_g16.piControl.001' \n",
+ "# point this at your own archive directory \n",
+ "file = '/glade/derecho/scratch/'+os.environ['USER']+'/archive/'+case+'/atm/hist/'+case+'.cam.h0.0001-12.nc' \n",
+ "ds = xr.open_mfdataset(file, parallel=True, \n",
+ " data_vars='minimal', \n",
+ " coords='minimal', \n",
+ " compat='override') \n",
+ "\n",
+ "# Call function to add precipitation d18O to dataset \n",
+ "ds = calculate_d18Op(ds) \n",
+ "\n",
+ "fig, ax = plt.subplots( \n",
+ " nrows=1, ncols=1, \n",
+ " figsize=(6, 2), \n",
+ " subplot_kw={'projection': ccrs.Robinson(central_longitude=210)}, \n",
+ " constrained_layout=True) \n",
+ "\n",
+ "d18O_new, lon_new = add_cyclic_point(ds.d18O[0,:,:], ds.lon) \n",
+ "\n",
+ "# Plot model results using contourf \n",
+ "p0 = ax.contourf(lon_new, ds.lat, d18O_new, \n",
+ " levels=np.linspace(-30, 0, 16), extend='both', \n",
+ " transform=ccrs.PlateCarree()) \n",
+ "\n",
+ "plt.colorbar(p0, ax=ax) \n",
+ "ax.set_title('Dec δ18Op of PI') \n",
+ "ax.coastlines(linewidth=0.5) \n",
+ "```\n",
+ "\n",
+ "\n",
+ "**--------------- Questions for reflection ---------------** \n",
+ "- Do you notice any spatial patterns in precipitation $\\delta^{18}\\text{O}$? \n",
+ "\n",
+ " \n",
+ "\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/paleo/paleo.ipynb b/_sources/notebooks/challenge/paleo/paleo.ipynb
new file mode 100644
index 000000000..f0c87bef9
--- /dev/null
+++ b/_sources/notebooks/challenge/paleo/paleo.ipynb
@@ -0,0 +1,115 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Paleo \n",
+ "Paleoclimatology is the study of ancient climate variability and change, before the availability of instrumental records. \n",
+ "\n",
+ "Paleoclimatology relies on a combination of physical, biological, and chemical proxies of past environmental and climate change, such as glacial ice, tree rings, sediments, corals, and cave mineral deposits. \n",
+ "\n",
+ "CESM is widely used for paleoclimate studies. \n",
+ "\n",
+ "CESM simulations of past climates are a tool to better understand and interpret proxy reconstructions and to evaluate CESM skill in simulating out-of-sample climate states. \n",
+ "\n",
+ "Many proxy reconstructions are made using measurements of isotopic ratios in natural archives. \n",
+ "\n",
+ "A version of CESM that can simulate hydrogen and oxygen isotope ratios in the water cycle (isotope-enabled CESM) is commonly used in paleoclimate studies because it provides signals that can be compared more directly with proxies. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Learning Goals"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "346cbd7b-3b8e-41f0-b120-b369ab20f6cc",
+ "metadata": {},
+ "source": [
+ "- Student will learn how to modify Earth's orbital configuration in CESM for a simple paleoclimate experiment. \n",
+ "- Student will learn how to validate that the orbital modification is implemented properly. \n",
+ "- Student will learn how to quickly compare differences in paleo and preindustrial CESM runs using NCO and Ncview. \n",
+ "- Student will learn how to run a preindustrial isotope-enabled CESM experiment and plot precipitation δ18O. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Exercise 1-2 Details"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "346cbd7b-3b8e-41f0-b120-b369ab20f6cc",
+ "metadata": {},
+ "source": [
+ "- This exercise uses the same code base as the rest of the tutorial. \n",
+ "- You will be using the B1850 compset at the f19_g17 resolution. \n",
+ "- You will run a preindustrial control simulation and a simple mid-Holocene simulation. \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Exercise 3 Details"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "346cbd7b-3b8e-41f0-b120-b369ab20f6cc",
+ "metadata": {},
+ "source": [
+ "- This exercise uses a different code base from the rest of the tutorial (isotope-enabled CESM1.3). \n",
+ "- You will be using the B1850C5 compset at the f19_g16 resolution. \n",
+ "- You will run a preindustrial simulation with water isotope tracers. \n",
+ "\n",
+ "![Water isotope partitioning](../../../images/challenge/Precip_isotope_Cartoon.jpg)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop.ipynb b/_sources/notebooks/challenge/pop/pop.ipynb
new file mode 100644
index 000000000..3f1b58099
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop.ipynb
@@ -0,0 +1,249 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Ocean \n",
+ "\n",
+ "The default ocean component of CESM is the Parallel Ocean Program (POP). This will change in CESM3, where the default ocean component will be the Modular Ocean Model version 6 (MOM6). You will have the option to run a case using MOM6 in the last challenge exercise.\n",
+ "\n",
+ "For people interested in ocean science, it can be helpful to run simulations with only the sea-ice and ocean components active, driven by prescribed atmospheric forcing. This exercise will teach you how to run one of these ice-ocean simulations.\n",
+ "\n",
+ "This exercise was created by Gustavo Marques."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "45a57a9d-99e1-48c2-a365-b09f3aa40ec0",
+ "metadata": {},
+ "source": [
+ "## Learning Goals"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a39c7159-f7ee-4515-920f-68a8d345e392",
+ "metadata": {},
+ "source": [
+ "- Student will learn what a G compset is, the types of forcing available to run one, and how to run one.\n",
+ "- Student will learn how to make a namelist modification that turns off the overflow parameterization and compare results with a control experiment.\n",
+ "- Student will learn how to make a source code modification that changes zonal wind stress and compare results with a control experiment.\n",
+ "- Student will learn what a G1850ECO compset is and compare it to the G compset.\n",
+ "- Student will learn how to run a G compset using MOM6."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6bcc23d6-04c4-49b2-a809-15badc7b5ff9",
+ "metadata": {},
+ "source": [
+ "## Exercise Details"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "59f7b9fd-7a3d-4b54-b874-61ddc264b102",
+ "metadata": {},
+ "source": [
+ "- All exercises except the last one (\"5 - Control case using MOM6\") use the same code base as the rest of the tutorial. \n",
+ "- You will be using the G compset at the T62_g37 resolution (or TL319_t232 when using MOM6).\n",
+ "- You will run a control simulation and three experimental simulations. Each simulation will be run for one year. \n",
+ "- You will then use 'ncview' \\([http://meteora.ucsd.edu/~pierce/ncview_home_page.html](http://meteora.ucsd.edu/~pierce/ncview_home_page.html)\\) to evaluate how the experiments differ from the control simulation."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f1ed4850-1e61-4b03-b036-69ecaa06f23f",
+ "metadata": {},
+ "source": [
+ "## Useful POP references"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "27190b16-2c11-40a1-94fc-09fe0fbb1a57",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[CESM POP User's Guide](https://www.cesm.ucar.edu/models/pop)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9f4fecc3-e03e-4d35-aecb-7daa16a9acb0",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "\n",
+ "\n",
+ "[CESM POP Discussion Forum](https://bb.cgd.ucar.edu/cesm/forums/pop.136/)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "16849059-810c-4af7-8930-8af58fe75c11",
+ "metadata": {},
+ "source": [
+ "## Useful MOM6 references"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bef0c8ec-0e3d-4fae-ab91-25f92d96df91",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[MOM6 Examples Wiki](https://github.com/NOAA-GFDL/MOM6-examples/wiki/Home)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "58915900-4d8b-415f-9a47-6bdbf9a8e701",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[MOM6’s documentation](https://mom6.readthedocs.io/en/main/)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b580f76f-36cc-487e-acdd-0f1851661ef2",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[CESM MOM6 Discussion Forum](https://bb.cgd.ucar.edu/cesm/forums/mom6.148/)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c082b63d-a408-4b01-8fe8-c446d25a1c91",
+ "metadata": {},
+ "source": [
+ "## What is a G case?"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9ad378a2-89e1-4afe-ad88-e0c0759b9864",
+ "metadata": {},
+ "source": [
+ "The G compset has active and coupled ocean and sea-ice components, and it requires boundary forcing from the atmosphere. It is forced with atmospheric data that does not change interactively as the ocean and sea ice evolve in time. The land and land-ice components are not active during a G compset experiment, and runoff is specified. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3ce9e152-c915-4e18-8199-040a26cf68c5",
+ "metadata": {},
+ "source": [
+ "![gcase](../../../images/challenge/gcase.png)\n",
+ "\n",
+ "* Figure: G compset definition.*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "346ef398-2703-4990-9387-d9006e75c5e6",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[Component Set Definitions](https://www2.cesm.ucar.edu/models/cesm2/config/compsets.html)\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cecd306b-bc35-48e2-8b47-fec1362616cc",
+ "metadata": {},
+ "source": [
+ "## G Compset forcing data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6e0b74a-4578-40b3-8af1-920e6bacffc4",
+ "metadata": {},
+ "source": [
+ "There are two types of temporal forcing for G compsets:\n",
+ "- Normal Year Forcing (NYF) is 12 months of atmospheric data (like a climatology) that repeats every year. NYF is the default forcing.\n",
+ "- Interannual varying forcing (GIAF) is forcing that varies by year over the time period (1948-2017). \n",
+ "\n",
+ "There are two datasets that can be used for G compsets:\n",
+ "- JRA55-do atmospheric data \\([Tsujino et al. 2018](https://doi.org/10.1016/j.ocemod.2018.07.002)\\)\n",
+ "- Coordinated Ocean-ice Reference Experiments (CORE) version 2 atmospheric data \\([Large and Yeager 2009](http://doi.org/10.1007/s00382-008-0441-3)\\).\n",
+ "\n",
+ "In these exercises we will use the CORE NYF."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e77543f2-6f2a-4d29-8919-827a2d7f96e6",
+ "metadata": {},
+ "source": [
+ "## Post processing and viewing your output"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "221e2616-682c-44e5-835d-0fce3603555d",
+ "metadata": {},
+ "source": [
+ "1) You can create an annual average of the first year's data for each simulationg using the `ncra` (netCDF averager) command from the netCDF operators package \\([NCO](https://nco.sourceforge.net/)\\). \n",
+ "```\n",
+ "ncra $OUTPUT_DIR/*.pop.h.0001*nc $CASENAME.pop.h.0001.nc\n",
+ "```\n",
+ "\n",
+ "2) Create a file that contains differences between each of the experiments and the control simulation\n",
+ "```\n",
+ "ncdiff $CASENAME.pop.h.0001.nc $CONTROLCASE.pop.h.0001.nc $CASENAME_diff.nc\n",
+ "```\n",
+ "\n",
+ "3) Examine variables within each annual mean and the difference files using `ncview`\n",
+ "```\n",
+ "ncview $CASENAME_diff.nc\n",
+ "```\n",
+ "\n",
+ "4) You can also look at other monthly-mean outputs or component log files."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop_exercise_1.ipynb b/_sources/notebooks/challenge/pop/pop_exercise_1.ipynb
new file mode 100644
index 000000000..5e41c6ba1
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop_exercise_1.ipynb
@@ -0,0 +1,170 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 1: Control case"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0bdbbd2b-8255-44f3-8c8c-da725d26f845",
+ "metadata": {},
+ "source": [
+ "**NOTE:** Building the control case for the POP challenge exercises is idential to building the control case in the CICE challenge exercises. If you have already completed the CICE challenge exercises you can skip this step."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6457c1d2-0530-435d-ae27-d0f1eeabe583",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Run a control case
\n",
+ " \n",
+ "Create a case called **g_control** using the compset ``G`` at ``T62_g37`` resolution. \n",
+ " \n",
+ "Set the run length to **1 year**. \n",
+ "\n",
+ "Build and run the model. Since this is a control case, we want to build it \"out of the box\" without any modifications. \n",
+ "\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e2e33a95-e93c-4aca-86d7-1a830cc0562c",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints
\n",
+ " \n",
+ "**How do I compile?**\n",
+ "\n",
+ "You can compile with the command:\n",
+ " \n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ "**How do I control the output?**\n",
+ "\n",
+ "Check the following links:\n",
+ "\n",
+ "* https://www2.cesm.ucar.edu/models/cesm1.2/pop2/doc/faq/#output_tavg_add1\n",
+ "* https://www2.cesm.ucar.edu/models/cesm1.2/pop2/doc/faq/#output_tavg_add2\n",
+ "\n",
+ "**How do I check my solution?**\n",
+ "\n",
+ "When your run is completed, go to the archive directory. \n",
+ "\n",
+ "(1) Check that your archive directory contains files *pop.h.*, *pop.h.nday1*, etc\n",
+ "\n",
+ "\n",
+ "(2) Compare the contents of the ``h`` and ``h.nday1`` files using ``ncdump``.\n",
+ "\n",
+ "```\n",
+ "ncdump -h gpop.pop.h.0001-01-01-00000.nc\n",
+ "ncdump -h gpop.pop.h.nday1.0001-01-01-00000.nc\n",
+ "```\n",
+ "\n",
+ "(3) Look at the sizes of the files. \n",
+ "\n",
+ " \n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "Click here for the solution
\n",
+ " \n",
+ "Create a new case g_control with the command:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_newcase --case /glade/u/home/$USER/cases/g_control --compset G --res T62_g37 \n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd ~/cases/g_control \n",
+ "./case.setup\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Change the run length:\n",
+ "``` \n",
+ "./xmlchange STOP_N=1,STOP_OPTION=nyears\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "If needed, change job queue \n",
+ "and account number. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=tutorial,PROJECT=UESM0013 --force\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "
\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "g_control. \n",
+ " \n",
+ "(1) Check that your archive directory on derecho (The path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/g_control/ocn/hist\n",
+ "\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ " \n",
+ "
\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "dabace0e-c3f2-4c88-b77d-4b28828c0944",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "NPL 2023b",
+ "language": "python",
+ "name": "npl-2023b"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop_exercise_2.ipynb b/_sources/notebooks/challenge/pop/pop_exercise_2.ipynb
new file mode 100644
index 000000000..be758ad81
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop_exercise_2.ipynb
@@ -0,0 +1,175 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 2: Turn off parameterization"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "33cdee65-f03f-4c72-adfe-b5ce02416d12",
+ "metadata": {},
+ "source": [
+ "Oceanic overflows are dense currents originating in semienclosed basins or continental shelves. They contribute to the formation of abyssal waters and play a crucial role in large-scale ocean circulation. When these dense currents flow down the continental slope, they undergo intense mixing with the surrounding (ambient) ocean waters, causing significant changes in their density and transport (see figure below). However, these mixing processes occur on scales that are smaller than what ocean climate models can accurately capture, leading to poor simulations of deep waters and deep western boundary currents. To improve the representation of overflows some ocean climate models rely on overflow paramterizations, such as the one developed for the POP model (check [this](https://echorock.cgd.ucar.edu/staff/gokhan/OFP_Tech_Note.pdf) report for additional information). \n",
+ "\n",
+ "![overflows](../../../images/challenge/overflows.png)\n",
+ "\n",
+ "\n",
+ "* Figure: Physical processes acting in overflows (from [Legg et al., 2009](https://doi-org.cuucar.idm.oclc.org/10.1175/2008BAMS2667.1))
*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Turn off overflow parameterization
\n",
+ " \n",
+ "Create a case called **g_overflows** by cloning the control experiment case. \n",
+ " \n",
+ "Verify that the run length is set to **1 year**. \n",
+ "\n",
+ "In user_nl_pop make the following modifications:``overflows_on = .false.`` and ``overflows_interactive = .false.``\n",
+ "\n",
+ "Build and run the model for one year. \n",
+ "\n",
+ "Compare the simulations using ncview/ncdiff, etc.\n",
+ "\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e2e33a95-e93c-4aca-86d7-1a830cc0562c",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints
\n",
+ " \n",
+ "**How do I compile and run?**\n",
+ "\n",
+ "You can compile with the command:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ "You can run with the command:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ " \n",
+ "**How do I check the lenght of the run?**\n",
+ "\n",
+ "Use ```xmlquery``` to search for the variables that control the run length\n",
+ "\n",
+ " \n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution
\n",
+ " \n",
+ "Clone a new case g_overflows from your control experiment with the command:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_clone --case /glade/u/home/$USER/cases/g_overflows --clone /glade/u/home/$USER/cases/g_control\n",
+ "```\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd /glade/u/home/$USER/cases/g_overflows\n",
+ "./case.setup\n",
+ "```\n",
+ "\n",
+ "Verify that the run length is 1 year:\n",
+ "``` \n",
+ "./xmlquery STOP_N\n",
+ "./xmlquery STOP_OPTION\n",
+ "```\n",
+ " \n",
+ "Edit the file user_nl_pop and add the lines:\n",
+ "```\n",
+ " overflows_on = .false.\n",
+ " overflows_interactive = .false.\n",
+ "```\n",
+ "\n",
+ "If needed, change job queue \n",
+ "and account number. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=tutorial,PROJECT=UESM0013 --force\n",
+ "```\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "g_overflows. \n",
+ " \n",
+ "(1) Check that your archive directory on derecho (The path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/g_overflows/ocn/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ " \n",
+ "
\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f19ab341-b76b-462b-9bc9-49d4793ed409",
+ "metadata": {},
+ "source": [
+ "## Test your understanding"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "31d67bb4-3e04-459e-a6ac-866ee9224776",
+ "metadata": {},
+ "source": [
+ "- What variables do you expect to change when you turn off the overflow parameterization?\n",
+ "- What variables show a difference between this experiment and the control difference? How different are they?"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop_exercise_3.ipynb b/_sources/notebooks/challenge/pop/pop_exercise_3.ipynb
new file mode 100644
index 000000000..ae7c70372
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop_exercise_3.ipynb
@@ -0,0 +1,175 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 3: Modify wind stress"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "33cdee65-f03f-4c72-adfe-b5ce02416d12",
+ "metadata": {},
+ "source": [
+ "Wind stress plays a critical role in driving ocean currents and is a key factor in shaping the overall patterns of large-scale ocean circulation and, consequentialy, the climate. Further details on how wind stress affects the ocean circulation are discussed in [this](https://doi-org.cuucar.idm.oclc.org/10.1006/rwos.2001.0110) manuscript."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Increase zonal wind stress
\n",
+ " \n",
+ "Create a case called **g_windstress** by cloning the control experiment case. \n",
+ " \n",
+ "Verify that the run length is set to **1 year**. \n",
+ "\n",
+ "Modify the subroutine rotate_wind_stress in forcing_coupled.F90 to increase the first (x) component of the wind stress by 25%.\n",
+ "\n",
+ "Build and run the model for one year. \n",
+ "\n",
+ "Compare the simulations using ncview/ncdiff, etc.\n",
+ "\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e2e33a95-e93c-4aca-86d7-1a830cc0562c",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints
\n",
+ " \n",
+ "**How do I compile and run?**\n",
+ "\n",
+ "You can compile with the command:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ "You can run with the command:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ " \n",
+ "**How do I check the lenght of the run?**\n",
+ "\n",
+ "Use ```xmlquery``` to search for the variables that control the run length\n",
+ "\n",
+ " \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution
\n",
+ " \n",
+ "Clone a new case g_windstress from your control experiment with the command:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_clone --case /glade/u/home/$USER/cases/g_windstress --clone /glade/u/home/$USER/cases/g_control\n",
+ "```\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd /glade/u/home/$USER/cases/g_windstress\n",
+ "./case.setup\n",
+ "```\n",
+ "\n",
+ "Verify that the run length is 1 year:\n",
+ "``` \n",
+ "./xmlquery STOP_N\n",
+ "./xmlquery STOP_OPTION\n",
+ "```\n",
+ " \n",
+ "Copy the forcing_coupled.F90 file from the control case to the ocean SourceMods.\n",
+ "``` \n",
+ "cp /glade/u/home/$USER/code/my_cesm_code/components/pop/source/forcing_coupled.F90 /glade/u/home/$USER/cases/g_windstress/SourceMods/src.pop\n",
+ "``` \n",
+ " \n",
+ "Edit the file forcing_coupled.F90 in the rotate_wind_stress routine after ```SMFT(:,:,1,:)``` is defined:\n",
+ " \n",
+ "```\n",
+ " SMFT(:,:,1,:) = SMFT(:,:,1,:) * 1.25\n",
+ "```\n",
+ "\n",
+ "If needed, change job queue \n",
+ "and account number. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=tutorial,PROJECT=UESM0013 --force\n",
+ "```\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "g_windstress. \n",
+ " \n",
+ "(1) Check that your archive directory on derecho (The path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$user/archive/g_windstress/ocn/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ " \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "286e2e7f-ccea-4c5e-acc5-5f9867341102",
+ "metadata": {},
+ "source": [
+ "## Test your understanding"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "63f2688d-9857-4a49-93bf-2b3117ec0d13",
+ "metadata": {},
+ "source": [
+ "- What are the impacts of increased zonal wind stress? \n",
+ "- Where do you thinkt he impacts would be largest in the ocean?\n",
+ "- How do you think the changes would compare if you increased meridional wind stress?"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "NPL 2023b",
+ "language": "python",
+ "name": "npl-2023b"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/challenge/pop/pop_exercise_4.ipynb b/_sources/notebooks/challenge/pop/pop_exercise_4.ipynb
new file mode 100644
index 000000000..ffa3811df
--- /dev/null
+++ b/_sources/notebooks/challenge/pop/pop_exercise_4.ipynb
@@ -0,0 +1,161 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# 4: Turn on the ecosystem"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "72423b27-32ee-492a-a023-ffd418e2d6ea",
+ "metadata": {},
+ "source": [
+ "You can also explore setting up a similar case but using the ``G1850ECO`` component set. Note how this differs from the previous ``G`` component set we used in Exercise 1. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8f13d092-c9d8-4e47-93b2-caf3cb8335d6",
+ "metadata": {},
+ "source": [
+ "![gcase](../../../images/challenge/gecocase.png)\n",
+ "\n",
+ "* Figure: G1850ECO compset definition.
*"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b90d4773-7ca0-4131-ab07-517608a3e976",
+ "metadata": {},
+ "source": [
+ "\n",
+ "Exercise: Run a control case
\n",
+ " \n",
+ "Create a case called **g_eco1850** using the compset ``G1850ECO`` at ``T62_g37`` resolution. \n",
+ " \n",
+ "Set the run length to **1 year**. \n",
+ "\n",
+ "Build and run the model. Since this is a control case, we want to build it \"out of the box\" without any modifications. \n",
+ "\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e2e33a95-e93c-4aca-86d7-1a830cc0562c",
+ "metadata": {},
+ "source": [
+ " \n",
+ "\n",
+ "\n",
+ " Click here for hints
\n",
+ " \n",
+ "**How do I compile and run?**\n",
+ "\n",
+ "You can compile with the command:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "```\n",
+ "\n",
+ "You can run with the command:\n",
+ "```\n",
+ "./case.submit\n",
+ "```\n",
+ " \n",
+ "**How do I check the lenght of the run?**\n",
+ "\n",
+ "Use ```xmlquery``` to search for the variables that control the run length\n",
+ "\n",
+ " \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f639e182-f48a-431c-a594-9c34323417eb",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ "\n",
+ "Click here for the solution
\n",
+ " \n",
+ " \n",
+ "Create a new case G1850ECO with the command:\n",
+ "```\n",
+ "cd /glade/u/home/$USER/code/my_cesm_code/cime/scripts/\n",
+ "./create_newcase --case /glade/u/home/$USER/cases/G1850ECO --compset G1850ECO --res T62_g37\n",
+ "```\n",
+ "\n",
+ "Case setup:\n",
+ "``` \n",
+ "cd /glade/u/home/$USER/cases/G1850ECO \n",
+ "./case.setup\n",
+ "```\n",
+ " \n",
+ "Change the run length:\n",
+ "``` \n",
+ "./xmlchange STOP_N=1,STOP_OPTION=nyears\n",
+ "```\n",
+ "\n",
+ "If needed, change job queue \n",
+ "and account number. \n",
+ "For instance:\n",
+ "``` \n",
+ "./xmlchange JOB_QUEUE=tutorial,PROJECT=UESM0013 --force\n",
+ "```\n",
+ "\n",
+ "Build and submit:\n",
+ "```\n",
+ "qcmd -- ./case.build\n",
+ "./case.submit\n",
+ "```\n",
+ "\n",
+ "When the run is completed, look into the archive directory for: \n",
+ "G1850ECO. \n",
+ " \n",
+ "(1) Check that your archive directory on derecho (The path will be different on other machines): \n",
+ "```\n",
+ "cd /glade/derecho/scratch/$USER/archive/G1850ECO/ocn/hist\n",
+ "ls \n",
+ "```\n",
+ "\n",
+ " \n",
+ "
\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "dce7f4af-243c-47fd-b4d6-c37832aa80fd",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "NPL 2023b",
+ "language": "python",
+ "name": "npl-2023b"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/additional.ipynb b/_sources/notebooks/diagnostics/additional/additional.ipynb
new file mode 100644
index 000000000..c47852dd0
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/additional.ipynb
@@ -0,0 +1,54 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Additional Topics"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "This section provides other information about how to use CESM output including:\n",
+ "- The difference between timeseries and history files\n",
+ "- The Climate Variabilty and Diagnostics Package (CVDP)\n",
+ "- Links to different analysis tools and resources used by CESM developers and users"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/adf.ipynb b/_sources/notebooks/diagnostics/additional/adf.ipynb
new file mode 100644
index 000000000..8056d9f6d
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/adf.ipynb
@@ -0,0 +1,77 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# ADF"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Learning Goals\n",
+ "\n",
+ "- Enter learning goals here."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e73ce5d6-d2b1-4f32-b64f-337a1b02e2d0",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 1\n",
+ "\n",
+ "Info here"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "815e0869-0518-4cf9-9417-cd9b08965ca1",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 2\n",
+ "\n",
+ "Info here\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/analysis_tools.ipynb b/_sources/notebooks/diagnostics/additional/analysis_tools.ipynb
new file mode 100644
index 000000000..0e551fc31
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/analysis_tools.ipynb
@@ -0,0 +1,335 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# CESM analysis tools"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "55b5588e-6e74-4bd7-a278-877611c4e87b",
+ "metadata": {},
+ "source": [
+ "We have provided some information about tools the CESM users and developers use for analysis of model simulations below. This list is not comprehensive and is intended to provide you with information to start your searches."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6a4f8751-c312-49b5-a578-604b7f39099a",
+ "metadata": {},
+ "source": [
+ "## Analysis Software"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d31bdb0d-5afe-4b38-b304-34c3179ac6dc",
+ "metadata": {},
+ "source": [
+ "Many data analysis and visualization software packages are freely available for use on CISL-managed resources. These packages include some developed and supported by NCAR and CISL. Some of these resources are open source while others require licences.\n",
+ "\n",
+ "Some of these packages include:\n",
+ "- Numerous python packages\n",
+ "- Interactive Data Language (IDL) \n",
+ "- MATLAB\n",
+ "- NCAR Command Language (NCL)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5bd4569e-601e-47d9-ad24-ae2da7087b7e",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[CISL Data Analysis Website](https://arc.ucar.edu/knowledge_base/70550011)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cc1d09f3-dc55-46b2-912e-deee2147e45d",
+ "metadata": {},
+ "source": [
+ "## Python"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0275672f-9536-4bb6-bfff-4a2eb9bc3630",
+ "metadata": {},
+ "source": [
+ "Python is an open source, general-purpose programming language. \n",
+ "\n",
+ "Python is known for:\n",
+ "- having a wide range of applications and packages available. There is a huge user base and rougly ~1 gazillion online tutorials. \n",
+ "- active development in packages related to the geosciences.\n",
+ "\n",
+ "Python is becoming the dominant language for CESM developers and users, so most of the active development of tools for the CESM project at large are done in this language. We provide more detailed information below about some of the tools available for python users on NCAR computing assets."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8ce79e09-e9e2-4bf0-ad12-4aede3b2d072",
+ "metadata": {},
+ "source": [
+ "### Jupyter Hub"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f82c8f41-42a1-40ce-86bc-5138a9940d08",
+ "metadata": {},
+ "source": [
+ "The JupyterHub deployment that CISL manages allows \"push-button\" access to NCAR's supercomputing resource cluster of nodes used for data analysis and visualization, machine learning, and deep learning.\n",
+ "\n",
+ "JupyterHub gives users the ability to create, save, and share Jupyter Notebooks through the JupyterLab interface and to run interactive, web-based analysis, visualization and compute jobs on derecho and casper.\n",
+ "\n",
+ "Information about getting started with JupyterHub on NCAR computing resources, environments, and documentation is avaiable at the website below."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4b1cc783-53e5-45db-8954-385863b1a778",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[CISL Jupyter Hub Website](https://arc.ucar.edu/knowledge_base/70549913)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "71c323bc-a8e3-406d-a08b-92030f436863",
+ "metadata": {},
+ "source": [
+ "### Earth System Data Science initiative (ESDS)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2d21da94-244c-42c0-970f-8daec7bacf61",
+ "metadata": {},
+ "source": [
+ "ESDS is an NCAR initiative that seeks to foster a collaborative, open, inclusive community for Earth Science data analysis. ESDS promotes deeper collaboration centered on analytics, improving our capacity to deliver impactful, actionable, reproducible science and serve the university community by transforming how geoscientists synthesize and extract information from large, diverse data sets.\n",
+ "\n",
+ "More information, including FAQs and a blog with examples can be found at the website below. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a6813683-9206-492d-b477-2aee1abe4f17",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[ESDS Website](https://ncar.github.io/esds/about/)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f4055e68-100a-4ae7-8ee3-47cefcd43d73",
+ "metadata": {},
+ "source": [
+ "### Project Pythia"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f25bcd2d-a7d5-46fe-b5fb-05c14eb6a19b",
+ "metadata": {},
+ "source": [
+ "If you are new to Python and its application to the geosciences, then starting with Project Pythia is a good first step. Project Pythia is the education working group for Pangeo and is an educational resource for the entire geoscience community. Together these initiatives are helping geoscientists make sense of huge volumes of numerical scientific data using tools that facilitate open, reproducible science, and building an inclusive community of practice around these goals."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "552bc788-8a39-4ce9-b7ab-8d8f283614e7",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[Project Pythia Website](https://projectpythia.org/)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "efe8a781-b9e6-4ad2-a590-75ceab387fb4",
+ "metadata": {},
+ "source": [
+ "### GeoCAT"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4583d068-c9c4-44ed-8ca3-67352c4fd414",
+ "metadata": {},
+ "source": [
+ "The Geoscience Community Analysis Toolkit (GeoCAT) is a software engineering effort at NCAR. GeoCAT aims to create scalable data analysis and visualization tools for Earth System Science data to serve the geosciences community in the scientific Python ecosystem. GeoCAT tools are built upon the cornerstone technologies in the Pangeo stack such as Xarray, Dask, and Jupyter Notebooks. In addition, some of the functionalities in the GeoCAT stack are inspired/reimplemented from NCL."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "207a8b16-9f0b-447f-b027-1a47ba747d52",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[GeoCAT Website](https://geocat.ucar.edu/)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "56d3130e-8f95-4a5e-a797-0b33d538141a",
+ "metadata": {},
+ "source": [
+ "### MetPy"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "76b9dc6d-74d0-4444-96d4-e60db58f8257",
+ "metadata": {},
+ "source": [
+ "MetPy is a collection of tools in Python for reading, visualizing, and performing calculations with weather data. The website below has information about getting started as well as examples and a reference guide."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "53f2c96c-f36a-4080-99b9-e7e2fb1d899d",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[MetPy Website](https://unidata.github.io/MetPy/latest/)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7787ed12-bce7-4e49-ac3f-2229de327823",
+ "metadata": {},
+ "source": [
+ "## NCAR Command Language (NCL)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c83303a5-24e5-4758-aeed-0cf3672c665e",
+ "metadata": {},
+ "source": [
+ "NCL is an open source tool developed at NCAR that is free to download and use. It can be run at the command line in interactive mode or as a batch mode. While once a widely used language for CESM developers and users, NCL is now in a maintenence stage and is no longer in development. Much of the active development is now being done with python tools.\n",
+ "\n",
+ "NCL is known for:\n",
+ "- easy input/output use with netCDF, Grib, Grib2, shapefiles, ascii, and binary files. \n",
+ "- good graphics that are very flexible.\n",
+ "- functions tailored to the geosciences community.\n",
+ "- a central website with 1000+ examples. There are also mini language and processing manuals."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7b185d5f-dcdb-4275-99b4-67bf6e5dcc2b",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[NCL Website](https://www.ncl.ucar.edu/get_started.shtml)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7e2bcaf7-f5d0-4d20-b2ed-840841a02972",
+ "metadata": {},
+ "source": [
+ "## Panopoly"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3f774a9f-c4c0-4411-97df-99afe765259f",
+ "metadata": {},
+ "source": [
+ "Panopoly is a graphic user interface (GUI) application that allows the user to quickly view data in a number of file formats. Panopoly is similar to ncview, but it's more powerful. Panopoly works with files in netCDF, HDF, or GRIB format (among others). It also allows the user to perform simple calculations, apply masks, and quickly create spatial or line plots.\n",
+ "\n",
+ "The Panopoly website provies more documentation, including How-To's and demonstration videos."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "483efa17-9787-4bb4-8348-2d2ebadf1dbd",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[Panopoly Website](http://www.giss.nasa.gov/tools/panoply/)\n",
+ "\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "124c346f-9a8a-4589-b74a-e04194a3e473",
+ "metadata": {},
+ "source": [
+ "## Image Magick"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "38488672-40e0-4383-b00c-e3128cdc5304",
+ "metadata": {},
+ "source": [
+ "ImageMagick is a free suite of software that that can be used to display, manipulate, or compare images. It works with a wide range of file types (ps, pdf, png, gif, jpg, etc.). It can also be used to create movies. You can also alter an image at the command line. There are many options available when converting images, and more information can be found at the website below."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "23b653c8-95d6-4a28-a2cc-4d6656c3ceec",
+ "metadata": {},
+ "source": [
+ "\n",
+ "\n",
+ "[Image Magick Website](https://imagemagick.org/index.php)\n",
+ "\n",
+ "
"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/cvdp.ipynb b/_sources/notebooks/diagnostics/additional/cvdp.ipynb
new file mode 100644
index 000000000..1bc11bca9
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/cvdp.ipynb
@@ -0,0 +1,77 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# CVDP"
+ ]
+ },
+
+ {
+ "cell_type": "markdown",
+ "id": "080839b4-514f-4072-88a6-bd514424cf6e",
+ "metadata": {},
+ "source": [
+ "![CVDP Image](https://webext.cgd.ucar.edu/Multi-Case/CVDP_repository/cesm-controls_quadquad/nino34.spatialcomp.indmem.djf1.png)\n",
+ "* Figure: ENSO spatial composite metric from CVDP output showing different generations of CSM/CCSM/CESM.
*"
+ ]
+ },
+
+
+ {
+ "cell_type": "markdown",
+ "id": "8f46aef7-947e-498e-90b1-a4ed6b077f6d",
+ "metadata": {},
+ "source": [
+ "The Climate Variability Diagnostics Package (CVDP) developed by NSF-NCAR's Climate Analysis Section is an automated analysis tool and data repository \n",
+ "for assessing modes of climate variability and trends in models and observations. Time series, spatial patterns and power spectra are displayed \n",
+ "graphically via webpages and saved as NetCDF files for later use. The package can be applied to individual model simulations (style 1) or to \n",
+ "initial condition Large Ensembles (style 2). Both styles provide quantitative metrics comparing models and observations; style 2 also includes \n",
+ "ensemble mean (i.e., forced response) and ensemble spread (i.e., internal variability) diagnostics. Several detrending options are provided, \n",
+ "including linear, quadratic, 30-year high-pass filter and removal of the ensemble mean (in the case of Large Ensembles). All diagnostics and \n",
+ "metrics are fully documented with references to the peer-reviewed literature.\n",
+ "\n",
+ "Examples of CVDP output:\n",
+ "\n",
+ "[CMIP6 Historical+SSP585 Comparison](https://webext.cgd.ucar.edu/Multi-Case/CVDP_repository/cmip6.hist_ssp585_quadquad_1900-2100/)\n",
+ "\n",
+ "[CESM2-Large Ensemble Comparison](https://webext.cgd.ucar.edu/Multi-Case/CVDP_repository/cesm2-lens_quadquad_1850-2100/)\n",
+ "\n",
+ "See the [CVDP Documention page](https://www.cesm.ucar.edu/projects/cvdp/documentation) for instructions on how to run the CVDP.\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "def586e4-6553-48b9-b05c-2308dad9181c",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/large_ensembles.ipynb b/_sources/notebooks/diagnostics/additional/large_ensembles.ipynb
new file mode 100644
index 000000000..6b12beb9b
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/large_ensembles.ipynb
@@ -0,0 +1,77 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Large Ensembles"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Learning Goals\n",
+ "\n",
+ "- Enter learning goals here."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e73ce5d6-d2b1-4f32-b64f-337a1b02e2d0",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 1\n",
+ "\n",
+ "Info here"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "815e0869-0518-4cf9-9417-cd9b08965ca1",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Subsection 2\n",
+ "\n",
+ "Info here\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/postprocessing.ipynb b/_sources/notebooks/diagnostics/additional/postprocessing.ipynb
new file mode 100644
index 000000000..9d3605bf2
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/postprocessing.ipynb
@@ -0,0 +1,65 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# Postprocessing data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5bd5c142-f778-4570-8edc-cc760139f30e",
+ "metadata": {},
+ "source": [
+ "A wide range of tools exist for postprocessing and analyzing data with techniques and methods exist. One of the first things you have to decide is how to store your files.\n",
+ "\n",
+ "In the diagnostics notebooks we have have provided examples of how to use both history files and timeseries files, described below."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## History vs. Timeseries files\n",
+ "\n",
+ "When you run the CESM model the default output is history files, or files for a single timestep that include all variables for a given component and time frequency. However, most CESM community experiment data will be provided as timeseries files, or files that are a single variable over many timesteps. It is important you understand how to use both types of files, and for you to know that for some tasks (e.g. debugging) you should be using history files instead of timeseries files. However, it is much more efficient to store timeseries files because the overall size is smaller once the files have been processed into timeseries format.\n",
+ "\n",
+ "The current recommendation is to use the new [CUPiD diagnostics system](https://github.com/NCAR/CUPiD) to convert CESM history files into time series. You can try it yourself on CESM tutorial simulation data by running through the [CUPiD Notebook](../cupid.ipynb) under the diagnostics section.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e961b1bd-a1c8-4e54-bafc-46dcf78454f1",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/additional/uxarray.ipynb b/_sources/notebooks/diagnostics/additional/uxarray.ipynb
new file mode 100644
index 000000000..b02ecaf3e
--- /dev/null
+++ b/_sources/notebooks/diagnostics/additional/uxarray.ipynb
@@ -0,0 +1,1024 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "f406f992-92bd-4b17-9bd3-b99c5c8abaf3",
+ "metadata": {},
+ "source": [
+ "# UXarray\n",
+ "\n",
+ "UXarray is a Python package that was created to support scalable data analysis and visualization functionality on high-resolution unstructured grids. It is built around the UGRID conventions and provides Xarray-styled functionality for working directly with unstructured grids.\n",
+ "\n",
+ "UXarray is the product of a collaborative effort between Project Raijin, funded by an NSF EarthCube award between NSF NCAR and The Pennsylvania State University, and the SEATS project, funded by DOE."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b6f4905b-cd2a-454e-89cf-ccc585c90247",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "## Learning Goals\n",
+ "\n",
+ "With this notebook, we aim to:\n",
+ "\n",
+ "1. Clarify why UXarray can be useful\n",
+ "2. Provide self-learning resources about UXarray\n",
+ "3. How to access UXarray?\n",
+ "4. Give you a sense of how simple I/O and visualization with UXarray can be"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e73ce5d6-d2b1-4f32-b64f-337a1b02e2d0",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## 1. Why UXarray?\n",
+ "\n",
+ "UXarray can simplify your workflows with unstructured grids because it:\n",
+ "\n",
+ "- Enables significant data analysis and visualization functionality to be executed directly on unstructured grids\n",
+ "\n",
+ "- Adheres to the UGRID specifications for compatibility across a variety of mesh formats\n",
+ "\n",
+ "- Provides a single interface for supporting a variety of unstructured grid formats including UGRID, MPAS, SCRIP, and Exodus\n",
+ "\n",
+ "- Inherits from Xarray, providing simplified data using familiar (Xarray-like) data structures and operations\n",
+ "\n",
+ "- Brings standardization to unstructured mesh support for climate data analysis and visualization\n",
+ "\n",
+ "- Builds on optimized data structures and algorithms for handling large and complex unstructured datasets\n",
+ "\n",
+ "- Supports enhanced interoperability and community collaboration"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "815e0869-0518-4cf9-9417-cd9b08965ca1",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## 2. UXarray Resources\n",
+ "\n",
+ "The following are some UXarray resources that you can leverage to further learn about UXarray, get started with it, see demonstrations of its analysis and visualization capabilities, and learn how to contribute to it:\n",
+ "\n",
+ "### UXarray Documentation Website\n",
+ "\n",
+ "The [UXarray documentation website](https://uxarray.readthedocs.io/en/latest/index.html#) is the to-go place for you to access fundamental information about the tool such as:\n",
+ "\n",
+ "- [Getting started](https://uxarray.readthedocs.io/en/latest/quickstart.html)\n",
+ "- [User guide](https://uxarray.readthedocs.io/en/latest/userguide.html)\n",
+ "- [Gallery](https://uxarray.readthedocs.io/en/latest/gallery.html)\n",
+ "- [API reference](https://uxarray.readthedocs.io/en/latest/api.html)\n",
+ "- [Installation](https://uxarray.readthedocs.io/en/latest/getting-started/installation.html)\n",
+ "- [Contributor's guide](https://uxarray.readthedocs.io/en/latest/contributing.html)\n",
+ "\n",
+ "### UXArray's Project Pythia Cookbook\n",
+ "\n",
+ "This [cookbook](https://projectpythia.org/unstructured-grid-viz-cookbook/README.html) is a comprehensive showcase of workflows & techniques for **visualizing** Unstructured Grids using UXarray with its several notebooks.\n",
+ "\n",
+ "These notebooks can be executed online, without locally setting them up, with the help of the Binder interface provided."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "17a9fff2-4b3f-4983-8c68-7e7dc6bec119",
+ "metadata": {},
+ "source": [
+ "## 3. How to access UXarray?\n",
+ "\n",
+ "In addition to installing UXarray locally by following the instructions in the above [Installation](https://uxarray.readthedocs.io/en/latest/getting-started/installation.html) guide, if you are a user of the NSF NCAR's HPC clusters, you can access UXarray via the **NPL-2024a** (or a newer version) conda environment."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2f77f4fd-4b2d-4502-b953-d1f896e4dc35",
+ "metadata": {},
+ "source": [
+ "## 4. Minimal UXarray visualization\n",
+ "\n",
+ "**BEFORE BEGINNING THIS EXERCISE** - Check that your kernel is minimum `NPL-2024a`. This should be the default kernel, but if it is not, click on that button and select `NPL-2024a` or a newer version."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "63a76f11-4694-4953-818e-61a974196f05",
+ "metadata": {},
+ "source": [
+ "### Data paths"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "216c2e3e-28b8-4ed2-8d64-4be300807ebe",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Set your username here:\n",
+ "username = \"PUT_USER_NAME_HERE\"\n",
+ "\n",
+ "# Here we point to an imaginary location for a ne30x8 directory\n",
+ "monthly_output_path = f\"/glade/derecho/scratch/{username}/ne30x8_dir/\"\n",
+ "\n",
+ "grid_filename = \"ne30x8_np4_SCRIP.nc\"\n",
+ "data_filename = \"ne30x8_220105.nc\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d22db07b-b5b3-4ae7-88f3-6c0d12e154aa",
+ "metadata": {},
+ "source": [
+ "### UXarray Code"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "f837dbd0-487c-4dee-a6fb-b2766c2bc1c8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import uxarray as ux"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "7b770106-c7ea-4183-8c0e-360613e6eb6f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "uxds_ne30x8 = ux.open_dataset(base_path + grid_filename, base_path + data_filename)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "8605491b-20b2-4b61-b1aa-5163b1ab5542",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "\n",
+ "Original Grid Type: Scrip\n",
+ "Grid Dimensions:\n",
+ " * n_node: 1184802\n",
+ " * n_edge: 2601516\n",
+ " * n_face: 710858\n",
+ " * n_max_face_nodes: 10\n",
+ " * two: 2\n",
+ " * n_nodes_per_face: (710858,)\n",
+ "Grid Coordinates (Spherical):\n",
+ " * node_lon: (1184802,)\n",
+ " * node_lat: (1184802,)\n",
+ " * face_lon: (710858,)\n",
+ " * face_lat: (710858,)\n",
+ "Grid Coordinates (Cartesian):\n",
+ "Grid Connectivity Variables:\n",
+ " * face_node_connectivity: (710858, 10)\n",
+ " * edge_node_connectivity: (2601516, 2)\n",
+ "Grid Descriptor Variables:"
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Let's examine the grid:\n",
+ "uxds_ne30x8.uxgrid"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "0c5a8ac5-78a5-4c1c-88ac-b89748e7667c",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/glade/u/home/oero/conda-envs/uxarray-dask/lib/python3.12/site-packages/uxarray/grid/geometry.py:95: UserWarning: Converting to a GeoDataFrame with over 1,000,000 faces may take some time.\n",
+ " warnings.warn(\n"
+ ]
+ },
+ {
+ "data": {
+ "application/javascript": [
+ "(function(root) {\n",
+ " function now() {\n",
+ " return new Date();\n",
+ " }\n",
+ "\n",
+ " var force = true;\n",
+ " var py_version = '3.4.2'.replace('rc', '-rc.').replace('.dev', '-dev.');\n",
+ " var reloading = false;\n",
+ " var Bokeh = root.Bokeh;\n",
+ "\n",
+ " if (typeof (root._bokeh_timeout) === \"undefined\" || force) {\n",
+ " root._bokeh_timeout = Date.now() + 5000;\n",
+ " root._bokeh_failed_load = false;\n",
+ " }\n",
+ "\n",
+ " function run_callbacks() {\n",
+ " try {\n",
+ " root._bokeh_onload_callbacks.forEach(function(callback) {\n",
+ " if (callback != null)\n",
+ " callback();\n",
+ " });\n",
+ " } finally {\n",
+ " delete root._bokeh_onload_callbacks;\n",
+ " }\n",
+ " console.debug(\"Bokeh: all callbacks have finished\");\n",
+ " }\n",
+ "\n",
+ " function load_libs(css_urls, js_urls, js_modules, js_exports, callback) {\n",
+ " if (css_urls == null) css_urls = [];\n",
+ " if (js_urls == null) js_urls = [];\n",
+ " if (js_modules == null) js_modules = [];\n",
+ " if (js_exports == null) js_exports = {};\n",
+ "\n",
+ " root._bokeh_onload_callbacks.push(callback);\n",
+ "\n",
+ " if (root._bokeh_is_loading > 0) {\n",
+ " console.debug(\"Bokeh: BokehJS is being loaded, scheduling callback at\", now());\n",
+ " return null;\n",
+ " }\n",
+ " if (js_urls.length === 0 && js_modules.length === 0 && Object.keys(js_exports).length === 0) {\n",
+ " run_callbacks();\n",
+ " return null;\n",
+ " }\n",
+ " if (!reloading) {\n",
+ " console.debug(\"Bokeh: BokehJS not loaded, scheduling load and callback at\", now());\n",
+ " }\n",
+ "\n",
+ " function on_load() {\n",
+ " root._bokeh_is_loading--;\n",
+ " if (root._bokeh_is_loading === 0) {\n",
+ " console.debug(\"Bokeh: all BokehJS libraries/stylesheets loaded\");\n",
+ " run_callbacks()\n",
+ " }\n",
+ " }\n",
+ " window._bokeh_on_load = on_load\n",
+ "\n",
+ " function on_error() {\n",
+ " console.error(\"failed to load \" + url);\n",
+ " }\n",
+ "\n",
+ " var skip = [];\n",
+ " if (window.requirejs) {\n",
+ " window.requirejs.config({'packages': {}, 'paths': {}, 'shim': {}});\n",
+ " root._bokeh_is_loading = css_urls.length + 0;\n",
+ " } else {\n",
+ " root._bokeh_is_loading = css_urls.length + js_urls.length + js_modules.length + Object.keys(js_exports).length;\n",
+ " }\n",
+ "\n",
+ " var existing_stylesheets = []\n",
+ " var links = document.getElementsByTagName('link')\n",
+ " for (var i = 0; i < links.length; i++) {\n",
+ " var link = links[i]\n",
+ " if (link.href != null) {\n",
+ "\texisting_stylesheets.push(link.href)\n",
+ " }\n",
+ " }\n",
+ " for (var i = 0; i < css_urls.length; i++) {\n",
+ " var url = css_urls[i];\n",
+ " if (existing_stylesheets.indexOf(url) !== -1) {\n",
+ "\ton_load()\n",
+ "\tcontinue;\n",
+ " }\n",
+ " const element = document.createElement(\"link\");\n",
+ " element.onload = on_load;\n",
+ " element.onerror = on_error;\n",
+ " element.rel = \"stylesheet\";\n",
+ " element.type = \"text/css\";\n",
+ " element.href = url;\n",
+ " console.debug(\"Bokeh: injecting link tag for BokehJS stylesheet: \", url);\n",
+ " document.body.appendChild(element);\n",
+ " } var existing_scripts = []\n",
+ " var scripts = document.getElementsByTagName('script')\n",
+ " for (var i = 0; i < scripts.length; i++) {\n",
+ " var script = scripts[i]\n",
+ " if (script.src != null) {\n",
+ "\texisting_scripts.push(script.src)\n",
+ " }\n",
+ " }\n",
+ " for (var i = 0; i < js_urls.length; i++) {\n",
+ " var url = js_urls[i];\n",
+ " if (skip.indexOf(url) !== -1 || existing_scripts.indexOf(url) !== -1) {\n",
+ "\tif (!window.requirejs) {\n",
+ "\t on_load();\n",
+ "\t}\n",
+ "\tcontinue;\n",
+ " }\n",
+ " var element = document.createElement('script');\n",
+ " element.onload = on_load;\n",
+ " element.onerror = on_error;\n",
+ " element.async = false;\n",
+ " element.src = url;\n",
+ " console.debug(\"Bokeh: injecting script tag for BokehJS library: \", url);\n",
+ " document.head.appendChild(element);\n",
+ " }\n",
+ " for (var i = 0; i < js_modules.length; i++) {\n",
+ " var url = js_modules[i];\n",
+ " if (skip.indexOf(url) !== -1 || existing_scripts.indexOf(url) !== -1) {\n",
+ "\tif (!window.requirejs) {\n",
+ "\t on_load();\n",
+ "\t}\n",
+ "\tcontinue;\n",
+ " }\n",
+ " var element = document.createElement('script');\n",
+ " element.onload = on_load;\n",
+ " element.onerror = on_error;\n",
+ " element.async = false;\n",
+ " element.src = url;\n",
+ " element.type = \"module\";\n",
+ " console.debug(\"Bokeh: injecting script tag for BokehJS library: \", url);\n",
+ " document.head.appendChild(element);\n",
+ " }\n",
+ " for (const name in js_exports) {\n",
+ " var url = js_exports[name];\n",
+ " if (skip.indexOf(url) >= 0 || root[name] != null) {\n",
+ "\tif (!window.requirejs) {\n",
+ "\t on_load();\n",
+ "\t}\n",
+ "\tcontinue;\n",
+ " }\n",
+ " var element = document.createElement('script');\n",
+ " element.onerror = on_error;\n",
+ " element.async = false;\n",
+ " element.type = \"module\";\n",
+ " console.debug(\"Bokeh: injecting script tag for BokehJS library: \", url);\n",
+ " element.textContent = `\n",
+ " import ${name} from \"${url}\"\n",
+ " window.${name} = ${name}\n",
+ " window._bokeh_on_load()\n",
+ " `\n",
+ " document.head.appendChild(element);\n",
+ " }\n",
+ " if (!js_urls.length && !js_modules.length) {\n",
+ " on_load()\n",
+ " }\n",
+ " };\n",
+ "\n",
+ " function inject_raw_css(css) {\n",
+ " const element = document.createElement(\"style\");\n",
+ " element.appendChild(document.createTextNode(css));\n",
+ " document.body.appendChild(element);\n",
+ " }\n",
+ "\n",
+ " var js_urls = [\"https://cdn.bokeh.org/bokeh/release/bokeh-3.4.2.min.js\", \"https://cdn.bokeh.org/bokeh/release/bokeh-gl-3.4.2.min.js\", \"https://cdn.bokeh.org/bokeh/release/bokeh-widgets-3.4.2.min.js\", \"https://cdn.bokeh.org/bokeh/release/bokeh-tables-3.4.2.min.js\", \"https://cdn.holoviz.org/panel/1.4.4/dist/panel.min.js\"];\n",
+ " var js_modules = [];\n",
+ " var js_exports = {};\n",
+ " var css_urls = [];\n",
+ " var inline_js = [ function(Bokeh) {\n",
+ " Bokeh.set_log_level(\"info\");\n",
+ " },\n",
+ "function(Bokeh) {} // ensure no trailing comma for IE\n",
+ " ];\n",
+ "\n",
+ " function run_inline_js() {\n",
+ " if ((root.Bokeh !== undefined) || (force === true)) {\n",
+ " for (var i = 0; i < inline_js.length; i++) {\n",
+ "\ttry {\n",
+ " inline_js[i].call(root, root.Bokeh);\n",
+ "\t} catch(e) {\n",
+ "\t if (!reloading) {\n",
+ "\t throw e;\n",
+ "\t }\n",
+ "\t}\n",
+ " }\n",
+ " // Cache old bokeh versions\n",
+ " if (Bokeh != undefined && !reloading) {\n",
+ "\tvar NewBokeh = root.Bokeh;\n",
+ "\tif (Bokeh.versions === undefined) {\n",
+ "\t Bokeh.versions = new Map();\n",
+ "\t}\n",
+ "\tif (NewBokeh.version !== Bokeh.version) {\n",
+ "\t Bokeh.versions.set(NewBokeh.version, NewBokeh)\n",
+ "\t}\n",
+ "\troot.Bokeh = Bokeh;\n",
+ " }} else if (Date.now() < root._bokeh_timeout) {\n",
+ " setTimeout(run_inline_js, 100);\n",
+ " } else if (!root._bokeh_failed_load) {\n",
+ " console.log(\"Bokeh: BokehJS failed to load within specified timeout.\");\n",
+ " root._bokeh_failed_load = true;\n",
+ " }\n",
+ " root._bokeh_is_initializing = false\n",
+ " }\n",
+ "\n",
+ " function load_or_wait() {\n",
+ " // Implement a backoff loop that tries to ensure we do not load multiple\n",
+ " // versions of Bokeh and its dependencies at the same time.\n",
+ " // In recent versions we use the root._bokeh_is_initializing flag\n",
+ " // to determine whether there is an ongoing attempt to initialize\n",
+ " // bokeh, however for backward compatibility we also try to ensure\n",
+ " // that we do not start loading a newer (Panel>=1.0 and Bokeh>3) version\n",
+ " // before older versions are fully initialized.\n",
+ " if (root._bokeh_is_initializing && Date.now() > root._bokeh_timeout) {\n",
+ " root._bokeh_is_initializing = false;\n",
+ " root._bokeh_onload_callbacks = undefined;\n",
+ " console.log(\"Bokeh: BokehJS was loaded multiple times but one version failed to initialize.\");\n",
+ " load_or_wait();\n",
+ " } else if (root._bokeh_is_initializing || (typeof root._bokeh_is_initializing === \"undefined\" && root._bokeh_onload_callbacks !== undefined)) {\n",
+ " setTimeout(load_or_wait, 100);\n",
+ " } else {\n",
+ " root._bokeh_is_initializing = true\n",
+ " root._bokeh_onload_callbacks = []\n",
+ " var bokeh_loaded = Bokeh != null && (Bokeh.version === py_version || (Bokeh.versions !== undefined && Bokeh.versions.has(py_version)));\n",
+ " if (!reloading && !bokeh_loaded) {\n",
+ "\troot.Bokeh = undefined;\n",
+ " }\n",
+ " load_libs(css_urls, js_urls, js_modules, js_exports, function() {\n",
+ "\tconsole.debug(\"Bokeh: BokehJS plotting callback run at\", now());\n",
+ "\trun_inline_js();\n",
+ " });\n",
+ " }\n",
+ " }\n",
+ " // Give older versions of the autoload script a head-start to ensure\n",
+ " // they initialize before we start loading newer version.\n",
+ " setTimeout(load_or_wait, 100)\n",
+ "}(window));"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/javascript": [
+ "\n",
+ "if ((window.PyViz === undefined) || (window.PyViz instanceof HTMLElement)) {\n",
+ " window.PyViz = {comms: {}, comm_status:{}, kernels:{}, receivers: {}, plot_index: []}\n",
+ "}\n",
+ "\n",
+ "\n",
+ " function JupyterCommManager() {\n",
+ " }\n",
+ "\n",
+ " JupyterCommManager.prototype.register_target = function(plot_id, comm_id, msg_handler) {\n",
+ " if (window.comm_manager || ((window.Jupyter !== undefined) && (Jupyter.notebook.kernel != null))) {\n",
+ " var comm_manager = window.comm_manager || Jupyter.notebook.kernel.comm_manager;\n",
+ " comm_manager.register_target(comm_id, function(comm) {\n",
+ " comm.on_msg(msg_handler);\n",
+ " });\n",
+ " } else if ((plot_id in window.PyViz.kernels) && (window.PyViz.kernels[plot_id])) {\n",
+ " window.PyViz.kernels[plot_id].registerCommTarget(comm_id, function(comm) {\n",
+ " comm.onMsg = msg_handler;\n",
+ " });\n",
+ " } else if (typeof google != 'undefined' && google.colab.kernel != null) {\n",
+ " google.colab.kernel.comms.registerTarget(comm_id, (comm) => {\n",
+ " var messages = comm.messages[Symbol.asyncIterator]();\n",
+ " function processIteratorResult(result) {\n",
+ " var message = result.value;\n",
+ " console.log(message)\n",
+ " var content = {data: message.data, comm_id};\n",
+ " var buffers = []\n",
+ " for (var buffer of message.buffers || []) {\n",
+ " buffers.push(new DataView(buffer))\n",
+ " }\n",
+ " var metadata = message.metadata || {};\n",
+ " var msg = {content, buffers, metadata}\n",
+ " msg_handler(msg);\n",
+ " return messages.next().then(processIteratorResult);\n",
+ " }\n",
+ " return messages.next().then(processIteratorResult);\n",
+ " })\n",
+ " }\n",
+ " }\n",
+ "\n",
+ " JupyterCommManager.prototype.get_client_comm = function(plot_id, comm_id, msg_handler) {\n",
+ " if (comm_id in window.PyViz.comms) {\n",
+ " return window.PyViz.comms[comm_id];\n",
+ " } else if (window.comm_manager || ((window.Jupyter !== undefined) && (Jupyter.notebook.kernel != null))) {\n",
+ " var comm_manager = window.comm_manager || Jupyter.notebook.kernel.comm_manager;\n",
+ " var comm = comm_manager.new_comm(comm_id, {}, {}, {}, comm_id);\n",
+ " if (msg_handler) {\n",
+ " comm.on_msg(msg_handler);\n",
+ " }\n",
+ " } else if ((plot_id in window.PyViz.kernels) && (window.PyViz.kernels[plot_id])) {\n",
+ " var comm = window.PyViz.kernels[plot_id].connectToComm(comm_id);\n",
+ " comm.open();\n",
+ " if (msg_handler) {\n",
+ " comm.onMsg = msg_handler;\n",
+ " }\n",
+ " } else if (typeof google != 'undefined' && google.colab.kernel != null) {\n",
+ " var comm_promise = google.colab.kernel.comms.open(comm_id)\n",
+ " comm_promise.then((comm) => {\n",
+ " window.PyViz.comms[comm_id] = comm;\n",
+ " if (msg_handler) {\n",
+ " var messages = comm.messages[Symbol.asyncIterator]();\n",
+ " function processIteratorResult(result) {\n",
+ " var message = result.value;\n",
+ " var content = {data: message.data};\n",
+ " var metadata = message.metadata || {comm_id};\n",
+ " var msg = {content, metadata}\n",
+ " msg_handler(msg);\n",
+ " return messages.next().then(processIteratorResult);\n",
+ " }\n",
+ " return messages.next().then(processIteratorResult);\n",
+ " }\n",
+ " }) \n",
+ " var sendClosure = (data, metadata, buffers, disposeOnDone) => {\n",
+ " return comm_promise.then((comm) => {\n",
+ " comm.send(data, metadata, buffers, disposeOnDone);\n",
+ " });\n",
+ " };\n",
+ " var comm = {\n",
+ " send: sendClosure\n",
+ " };\n",
+ " }\n",
+ " window.PyViz.comms[comm_id] = comm;\n",
+ " return comm;\n",
+ " }\n",
+ " window.PyViz.comm_manager = new JupyterCommManager();\n",
+ " \n",
+ "\n",
+ "\n",
+ "var JS_MIME_TYPE = 'application/javascript';\n",
+ "var HTML_MIME_TYPE = 'text/html';\n",
+ "var EXEC_MIME_TYPE = 'application/vnd.holoviews_exec.v0+json';\n",
+ "var CLASS_NAME = 'output';\n",
+ "\n",
+ "/**\n",
+ " * Render data to the DOM node\n",
+ " */\n",
+ "function render(props, node) {\n",
+ " var div = document.createElement(\"div\");\n",
+ " var script = document.createElement(\"script\");\n",
+ " node.appendChild(div);\n",
+ " node.appendChild(script);\n",
+ "}\n",
+ "\n",
+ "/**\n",
+ " * Handle when a new output is added\n",
+ " */\n",
+ "function handle_add_output(event, handle) {\n",
+ " var output_area = handle.output_area;\n",
+ " var output = handle.output;\n",
+ " if ((output.data == undefined) || (!output.data.hasOwnProperty(EXEC_MIME_TYPE))) {\n",
+ " return\n",
+ " }\n",
+ " var id = output.metadata[EXEC_MIME_TYPE][\"id\"];\n",
+ " var toinsert = output_area.element.find(\".\" + CLASS_NAME.split(' ')[0]);\n",
+ " if (id !== undefined) {\n",
+ " var nchildren = toinsert.length;\n",
+ " var html_node = toinsert[nchildren-1].children[0];\n",
+ " html_node.innerHTML = output.data[HTML_MIME_TYPE];\n",
+ " var scripts = [];\n",
+ " var nodelist = html_node.querySelectorAll(\"script\");\n",
+ " for (var i in nodelist) {\n",
+ " if (nodelist.hasOwnProperty(i)) {\n",
+ " scripts.push(nodelist[i])\n",
+ " }\n",
+ " }\n",
+ "\n",
+ " scripts.forEach( function (oldScript) {\n",
+ " var newScript = document.createElement(\"script\");\n",
+ " var attrs = [];\n",
+ " var nodemap = oldScript.attributes;\n",
+ " for (var j in nodemap) {\n",
+ " if (nodemap.hasOwnProperty(j)) {\n",
+ " attrs.push(nodemap[j])\n",
+ " }\n",
+ " }\n",
+ " attrs.forEach(function(attr) { newScript.setAttribute(attr.name, attr.value) });\n",
+ " newScript.appendChild(document.createTextNode(oldScript.innerHTML));\n",
+ " oldScript.parentNode.replaceChild(newScript, oldScript);\n",
+ " });\n",
+ " if (JS_MIME_TYPE in output.data) {\n",
+ " toinsert[nchildren-1].children[1].textContent = output.data[JS_MIME_TYPE];\n",
+ " }\n",
+ " output_area._hv_plot_id = id;\n",
+ " if ((window.Bokeh !== undefined) && (id in Bokeh.index)) {\n",
+ " window.PyViz.plot_index[id] = Bokeh.index[id];\n",
+ " } else {\n",
+ " window.PyViz.plot_index[id] = null;\n",
+ " }\n",
+ " } else if (output.metadata[EXEC_MIME_TYPE][\"server_id\"] !== undefined) {\n",
+ " var bk_div = document.createElement(\"div\");\n",
+ " bk_div.innerHTML = output.data[HTML_MIME_TYPE];\n",
+ " var script_attrs = bk_div.children[0].attributes;\n",
+ " for (var i = 0; i < script_attrs.length; i++) {\n",
+ " toinsert[toinsert.length - 1].childNodes[1].setAttribute(script_attrs[i].name, script_attrs[i].value);\n",
+ " }\n",
+ " // store reference to server id on output_area\n",
+ " output_area._bokeh_server_id = output.metadata[EXEC_MIME_TYPE][\"server_id\"];\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "/**\n",
+ " * Handle when an output is cleared or removed\n",
+ " */\n",
+ "function handle_clear_output(event, handle) {\n",
+ " var id = handle.cell.output_area._hv_plot_id;\n",
+ " var server_id = handle.cell.output_area._bokeh_server_id;\n",
+ " if (((id === undefined) || !(id in PyViz.plot_index)) && (server_id !== undefined)) { return; }\n",
+ " var comm = window.PyViz.comm_manager.get_client_comm(\"hv-extension-comm\", \"hv-extension-comm\", function () {});\n",
+ " if (server_id !== null) {\n",
+ " comm.send({event_type: 'server_delete', 'id': server_id});\n",
+ " return;\n",
+ " } else if (comm !== null) {\n",
+ " comm.send({event_type: 'delete', 'id': id});\n",
+ " }\n",
+ " delete PyViz.plot_index[id];\n",
+ " if ((window.Bokeh !== undefined) & (id in window.Bokeh.index)) {\n",
+ " var doc = window.Bokeh.index[id].model.document\n",
+ " doc.clear();\n",
+ " const i = window.Bokeh.documents.indexOf(doc);\n",
+ " if (i > -1) {\n",
+ " window.Bokeh.documents.splice(i, 1);\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "/**\n",
+ " * Handle kernel restart event\n",
+ " */\n",
+ "function handle_kernel_cleanup(event, handle) {\n",
+ " delete PyViz.comms[\"hv-extension-comm\"];\n",
+ " window.PyViz.plot_index = {}\n",
+ "}\n",
+ "\n",
+ "/**\n",
+ " * Handle update_display_data messages\n",
+ " */\n",
+ "function handle_update_output(event, handle) {\n",
+ " handle_clear_output(event, {cell: {output_area: handle.output_area}})\n",
+ " handle_add_output(event, handle)\n",
+ "}\n",
+ "\n",
+ "function register_renderer(events, OutputArea) {\n",
+ " function append_mime(data, metadata, element) {\n",
+ " // create a DOM node to render to\n",
+ " var toinsert = this.create_output_subarea(\n",
+ " metadata,\n",
+ " CLASS_NAME,\n",
+ " EXEC_MIME_TYPE\n",
+ " );\n",
+ " this.keyboard_manager.register_events(toinsert);\n",
+ " // Render to node\n",
+ " var props = {data: data, metadata: metadata[EXEC_MIME_TYPE]};\n",
+ " render(props, toinsert[0]);\n",
+ " element.append(toinsert);\n",
+ " return toinsert\n",
+ " }\n",
+ "\n",
+ " events.on('output_added.OutputArea', handle_add_output);\n",
+ " events.on('output_updated.OutputArea', handle_update_output);\n",
+ " events.on('clear_output.CodeCell', handle_clear_output);\n",
+ " events.on('delete.Cell', handle_clear_output);\n",
+ " events.on('kernel_ready.Kernel', handle_kernel_cleanup);\n",
+ "\n",
+ " OutputArea.prototype.register_mime_type(EXEC_MIME_TYPE, append_mime, {\n",
+ " safe: true,\n",
+ " index: 0\n",
+ " });\n",
+ "}\n",
+ "\n",
+ "if (window.Jupyter !== undefined) {\n",
+ " try {\n",
+ " var events = require('base/js/events');\n",
+ " var OutputArea = require('notebook/js/outputarea').OutputArea;\n",
+ " if (OutputArea.prototype.mime_types().indexOf(EXEC_MIME_TYPE) == -1) {\n",
+ " register_renderer(events, OutputArea);\n",
+ " }\n",
+ " } catch(err) {\n",
+ " }\n",
+ "}\n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.holoviews_exec.v0+json": "",
+ "text/html": [
+ "\n",
+ ""
+ ]
+ },
+ "metadata": {
+ "application/vnd.holoviews_exec.v0+json": {
+ "id": "p1002"
+ }
+ },
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ "\n",
+ "
\n",
+ "\n",
+ "\n",
+ "
\n",
+ " \n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "
\n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {},
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.holoviews_exec.v0+json": "",
+ "text/html": [
+ "\n",
+ ""
+ ],
+ "text/plain": [
+ ":DynamicMap []\n",
+ " :Image [x,y] (x_y soilw)"
+ ]
+ },
+ "execution_count": 5,
+ "metadata": {
+ "application/vnd.holoviews_exec.v0+json": {
+ "id": "p1004"
+ }
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Visualization\n",
+ "clim = (uxds_ne30x8['soilw'][0].values.min(), uxds_ne30x8['soilw'][0].values.max()) # colorbar limits\n",
+ "uxds_ne30x8['soilw'][0].plot.rasterize(method='polygon', dynamic=True, clim=clim)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python [conda env:uxarray-dask]",
+ "language": "python",
+ "name": "conda-env-uxarray-dask-py"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/_sources/notebooks/diagnostics/cam/advanced_cam.ipynb b/_sources/notebooks/diagnostics/cam/advanced_cam.ipynb
new file mode 100644
index 000000000..9838c6ed7
--- /dev/null
+++ b/_sources/notebooks/diagnostics/cam/advanced_cam.ipynb
@@ -0,0 +1,261 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Advanced Plotting"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "**BEFORE BEGINNING THIS EXERCISE** - Check that your kernel (upper right corner, above) is `NPL 2023b`. This should be the default kernel, but if it is not, click on that button and select `NPL 2023b`."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "This activity was developed primarily by Cecile Hannay and Jesse Nusbaumer."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "_______________\n",
+ "\n",
+ "## Exercise 1: CAM-SE output analysis\n",
+ "\n",
+ "Examples of simple analysis and plotting that can be done with CAM-SE output on the native cubed-sphere grid."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pathlib import Path\n",
+ "import xarray as xr\n",
+ "import numpy as np\n",
+ "import matplotlib.pyplot as plt\n",
+ "import cartopy.crs as ccrs"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def make_map(data, lon, lat,):\n",
+ " \"\"\"This function plots data on a Mollweide projection map.\n",
+ "\n",
+ " The data is transformed to the projection using Cartopy's `transform_points` method.\n",
+ "\n",
+ " The plot is made by triangulation of the points, producing output very similar to `pcolormesh`,\n",
+ " but with triangles instead of rectangles used to make the image.\n",
+ " \"\"\"\n",
+ " dataproj = ccrs.PlateCarree() # assumes data is lat/lon\n",
+ " plotproj = ccrs.Mollweide() # output projection \n",
+ " # set up figure / axes object, set to be global, add coastlines\n",
+ " fig, ax = plt.subplots(figsize=(6,3), subplot_kw={'projection':plotproj})\n",
+ " ax.set_global()\n",
+ " ax.coastlines(linewidth=0.2)\n",
+ " # this figures out the transformation between (lon,lat) and the specified projection\n",
+ " tcoords = plotproj.transform_points(dataproj, lon.values, lat.values) # working with the projection\n",
+ " xi=tcoords[:,0] != np.inf # there can be bad points set to infinity, but we'll ignore them\n",
+ " assert xi.shape[0] == tcoords.shape[0], f\"Something wrong with shapes should be the same: {xi.shape = }, {tcoords.shape = }\"\n",
+ " tc=tcoords[xi,:]\n",
+ " datai=data.values[xi] # convert to numpy array, then subset\n",
+ " # Use tripcolor --> triangluates the data to make the plot\n",
+ " # rasterized=True reduces the file size (necessary for high-resolution for reasonable file size)\n",
+ " # keep output as \"img\" to make specifying colorbar easy\n",
+ " img = ax.tripcolor(tc[:,0],tc[:,1], datai, shading='gouraud', rasterized=True)\n",
+ " cbar = fig.colorbar(img, ax=ax, shrink=0.4)\n",
+ " return fig, ax"
+ ]
+ },
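+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick usage sketch (a hypothetical call added for illustration, not part of the original notebook): `make_map` takes any field defined on the `ncol` dimension together with its `lon`/`lat` coordinates, e.g.\n",
+ "```\n",
+ "# hypothetical example: plot a time-averaged field (see the averaging cells below)\n",
+ "fig, ax = make_map(data_avg, lon, lat)\n",
+ "```"
+ ]
+ },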
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Input data\n",
+ "\n",
+ "In the following cell, specify the data source.\n",
+ "\n",
+ "`location_of_hfiles` is a path object that points to the directory where data files should be.\n",
+ "`search_pattern` specifies what pattern to look for inside that directory.\n",
+ "\n",
+ "**SIMPLIFICATION** If you want to just provide a path to a file, simply specify it by commenting (with `#`) the lines above \"# WE need lat and lon\", and replace with:\n",
+ "```\n",
+ "fil = \"/path/to/your/data/file.nc\"\n",
+ "ds = xr.open_dataset(fil)\n",
+ "```\n",
+ "\n",
+ "## Parameters\n",
+ "Specify the name of the variable to be analyzed with `variable_name`.\n",
+ "\n",
+ "To change the units of the variable, specify `scale_factor` and provide the new units string as `units`. Otherwise, just set `scale_factor` and `units`:\n",
+ "\n",
+ "```\n",
+ "scale_factor = 1\n",
+ "units = ds[\"variable_name\"].attrs[\"units\"]\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "location_of_hfiles = Path(\"/glade/campaign/cesm/tutorial/tutorial_2023_archive/cam-se/\")\n",
+ "search_pattern = \"f.cam6_3_112.FMTHIST_v0c.ne30.non-ogw-ubcT-effgw0.7_taubgnd2.5.001.cam.h3.2003-01-01-00000.nc\"\n",
+ "\n",
+ "fils = sorted(location_of_hfiles.glob(search_pattern))\n",
+ "if len(fils) == 1:\n",
+ " ds = xr.open_dataset(fils[0])\n",
+ "else:\n",
+ " print(f\"Just so you konw, there are {len(fils)} files about to be loaded.\")\n",
+ " ds = xr.open_mfdataset(fils)\n",
+ "\n",
+ "# We need lat and lon:\n",
+ "lat = ds['lat']\n",
+ "lon = ds['lon']\n",
+ "\n",
+ "# Choose what variables to plot,\n",
+ "# in this example we are going to combine the\n",
+ "# convective and stratiform precipitation into\n",
+ "# a single, total precipitation variable\n",
+ "convective_precip_name = \"PRECC\"\n",
+ "stratiform_precip_name = \"PRECL\"\n",
+ "\n",
+ "# If needed, select scale factor and new units:\n",
+ "scale_factor = 86400. * 1000. # m/s -> mm/day\n",
+ "units = \"mm/day\"\n",
+ "\n",
+ "cp_data = scale_factor * ds[convective_precip_name]\n",
+ "st_data = scale_factor * ds[stratiform_precip_name]\n",
+ "cp_data.attrs['units'] = units\n",
+ "st_data.attrs['units'] = units\n",
+ "\n",
+ "# Sum the two precip variables to get total precip\n",
+ "data = cp_data + st_data\n",
+ "data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# temporal averaging\n",
+ "# simplest case, just average over time:\n",
+ "data_avg = data.mean(dim='time')\n",
+ "data_avg"
+ ]
+ },
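+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The unweighted `mean(dim='time')` above treats every time sample equally, which is appropriate for evenly spaced history output. If you were averaging monthly means instead, you would weight by month length; a minimal sketch (assuming a CF-style `time` coordinate with a datetime accessor):\n",
+ "```\n",
+ "# weight each month by its number of days before averaging\n",
+ "wgts = data.time.dt.days_in_month\n",
+ "data_avg_weighted = data.weighted(wgts).mean(dim='time')\n",
+ "```"
+ ]
+ },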
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# Global average\n",
+ "#\n",
+ "data_global_average = data_avg.weighted(ds['area']).mean()\n",
+ "print(f\"The area-weighted average of the time-mean data is: {data_global_average.item()}\")"
+ ]
+ },
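+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For reference, `weighted(...).mean()` is equivalent (for NaN-free data) to forming the weighted sum by hand, using the cell areas carried in the history file:\n",
+ "```\n",
+ "# manual area-weighted mean, equivalent to the weighted() call above\n",
+ "manual_avg = (data_avg * ds['area']).sum() / ds['area'].sum()\n",
+ "```"
+ ]
+ },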
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#\n",
+ "# Regional average using a (logical) rectangle\n",
+ "#\n",
+ "west_lon = 110.0\n",
+ "east_lon = 200.0\n",
+ "south_lat = -30.0\n",
+ "north_lat = 30.0\n",
+ "\n",
+ "# To reduce to the region, we need to know which indices of ncol dimension are inside the boundary\n",
+ "\n",
+ "region_inds = np.argwhere(((lat > south_lat)&(lat < north_lat)&(lon>west_lon)&(lon \n",
+ "\n",
+ "Click here for the solution
\n",
+ "\n",
+ "![plot example](../../../images/diagnostics/cam/advanced_plot_1.png)\n",
+ "\n",
+ "* Figure: Plotting solution.
*\n",
+ " \n",
+ " \n",
+ "