diff --git a/CHANGELOG.rst b/CHANGELOG.rst index 63c012ab4c..4604441d0a 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -22,6 +22,11 @@ individual files. The changes are now listed with the most recent at the top. +**January 27 2023 :: Documentation update for porting new models. Tag v10.6.2** + +- Improved 'porting new models to DART' documentation. +- Removed outdated references to previous build system. + **December 21 2022 :: Documentation update for CLM and the DART Tutorial. Tag v10.6.1** - Improved instructions for the CLM-DART tutorial. diff --git a/README.rst b/README.rst index bfa316c518..117ee8966c 100644 --- a/README.rst +++ b/README.rst @@ -246,6 +246,7 @@ References :caption: Run DART with your model guide/advice-for-new-collaborators + guide/instructions-for-porting-a-new-model-to-dart DART build system guide/assimilation-complex-model guide/mpi_intro diff --git a/assimilation_code/location/README b/assimilation_code/location/README index 83177a08df..6503111b0d 100644 --- a/assimilation_code/location/README +++ b/assimilation_code/location/README @@ -13,7 +13,7 @@ needed by the DART algorithms. Each of the different location_mod.f90 files provides the same set of interfaces and defines a 'module location_mod', so by -selecting the proper version in your path_names_xxx file you +selecting the proper version in quickbuild.sh you can compile your model code with the main DART routines. threed_sphere: diff --git a/assimilation_code/location/channel/location_mod.rst b/assimilation_code/location/channel/location_mod.rst index 803fc458d0..a0b991290d 100644 --- a/assimilation_code/location/channel/location_mod.rst +++ b/assimilation_code/location/channel/location_mod.rst @@ -28,8 +28,7 @@ Location-independent code All types of location modules define the same module name ``location_mod``. Therefore, the DART framework and any user code should include a Fortran 90 ``use`` statement of ``location_mod``. 
The selection of which location module will be -compiled into the program is controlled by which source file name is specified in the ``path_names_xxx`` file, which is -used by the ``mkmf_xxx`` scripts. +compiled into the program is controlled by the LOCATION variable in ``quickbuild.sh``. All types of location modules define the same Fortran 90 derived type ``location_type``. Programs that need to pass location information to subroutines but do not need to interpret the contents can declare, receive, and pass this diff --git a/assimilation_code/location/location_mod.rst b/assimilation_code/location/location_mod.rst index f70536a67f..b819881117 100644 --- a/assimilation_code/location/location_mod.rst +++ b/assimilation_code/location/location_mod.rst @@ -16,7 +16,7 @@ provides code for creating, setting/getting, copying location information (coord specific coordinate information. It also contains distance routines needed by the DART algorithms. Each of the different location_mod.f90 files provides the same set of interfaces and defines a 'module location_mod', so -by selecting the proper version in your path_names_xxx file you can compile your model code with the main DART routines. +by selecting the proper version in quickbuild.sh you can compile your model code with the main DART routines. - :doc:`./threed_sphere/location_mod`: The most frequently used version for real-world 3d models. It uses latitude and longitude for horizontal coordinates, diff --git a/assimilation_code/location/oned/location_mod.rst b/assimilation_code/location/oned/location_mod.rst index c519e34b32..0feb2c1505 100644 --- a/assimilation_code/location/oned/location_mod.rst +++ b/assimilation_code/location/oned/location_mod.rst @@ -15,9 +15,9 @@ between 0 and 1. A type that abstracts the location is provided along with opera compute distances between locations. 
This is a member of a class of similar location modules that provide the same abstraction for different represenations of physical space. -All possible location modules define the same module name ``location_mod``. Therefore, the DART framework and any user -code should include a Fortran 90 'use' statement of 'location_mod'. The selection of exactly which location module is -compiled is specified by the source file name in the ``path_names_xxx`` file, which is read by the ``mkmf_xxx`` scripts. +All types of location modules define the same module name ``location_mod``. Therefore, the DART framework and any user +code should include a Fortran 90 ``use`` statement of ``location_mod``. The selection of which location module will be +compiled into the program is controlled by the LOCATION variable in ``quickbuild.sh``. The model-specific ``model_mod.f90`` files need to define four ``get_close`` routines, but in most cases they can simply put a ``use`` statement at the top which uses the routines in the locations module, and they do not have to provide any diff --git a/assimilation_code/location/threed_cartesian/location_mod.rst b/assimilation_code/location/threed_cartesian/location_mod.rst index ac6206c91d..94fa37db4a 100644 --- a/assimilation_code/location/threed_cartesian/location_mod.rst +++ b/assimilation_code/location/threed_cartesian/location_mod.rst @@ -26,8 +26,7 @@ Location-independent code All types of location modules define the same module name ``location_mod``. Therefore, the DART framework and any user code should include a Fortran 90 ``use`` statement of ``location_mod``. The selection of which location module will be -compiled into the program is controlled by which source file name is specified in the ``path_names_xxx`` file, which is -used by the ``mkmf_xxx`` scripts. +compiled into the program is controlled by the LOCATION variable in ``quickbuild.sh``. All types of location modules define the same Fortran 90 derived type ``location_type``. 
Programs that need to pass location information to subroutines but do not need to interpret the contents can declare, receive, and pass this diff --git a/assimilation_code/location/threed_sphere/location_mod.rst b/assimilation_code/location/threed_sphere/location_mod.rst index 1f7aaaf82e..6035a4fd61 100644 --- a/assimilation_code/location/threed_sphere/location_mod.rst +++ b/assimilation_code/location/threed_sphere/location_mod.rst @@ -340,8 +340,7 @@ Location-independent code All types of location modules define the same module name ``location_mod``. Therefore, the DART framework and any user code should include a Fortran 90 ``use`` statement of ``location_mod``. The selection of which location module will be -compiled into the program is controlled by which source file name is specified in the ``path_names_xxx`` file, which is -used by the ``mkmf_xxx`` scripts. +compiled into the program is controlled by the LOCATION variable in ``quickbuild.sh``. All types of location modules define the same Fortran 90 derived type ``location_type``. Programs that need to pass location information to subroutines but do not need to interpret the contents can declare, receive, and pass this diff --git a/assimilation_code/programs/model_mod_check/model_mod_check.rst b/assimilation_code/programs/model_mod_check/model_mod_check.rst index 962f330d53..def785e253 100644 --- a/assimilation_code/programs/model_mod_check/model_mod_check.rst +++ b/assimilation_code/programs/model_mod_check/model_mod_check.rst @@ -279,54 +279,6 @@ below. 
interp_test_vertcoord = 'VERTISHEIGHT' / -Other modules used ------------------- - -:: - - assimilation_code/location/threed_sphere/location_mod.f90 - assimilation_code/location/utilities/default_location_mod.f90 - assimilation_code/location/utilities/location_io_mod.f90 - assimilation_code/modules/assimilation/adaptive_inflate_mod.f90 - assimilation_code/modules/assimilation/assim_model_mod.f90 - assimilation_code/modules/assimilation/assim_tools_mod.f90 - assimilation_code/modules/assimilation/cov_cutoff_mod.f90 - assimilation_code/modules/assimilation/filter_mod.f90 - assimilation_code/modules/assimilation/obs_model_mod.f90 - assimilation_code/modules/assimilation/quality_control_mod.f90 - assimilation_code/modules/assimilation/reg_factor_mod.f90 - assimilation_code/modules/assimilation/sampling_error_correction_mod.f90 - assimilation_code/modules/assimilation/smoother_mod.f90 - assimilation_code/modules/io/dart_time_io_mod.f90 - assimilation_code/modules/io/direct_netcdf_mod.f90 - assimilation_code/modules/io/io_filenames_mod.f90 - assimilation_code/modules/io/state_structure_mod.f90 - assimilation_code/modules/io/state_vector_io_mod.f90 - assimilation_code/modules/observations/forward_operator_mod.f90 - assimilation_code/modules/observations/obs_kind_mod.f90 - assimilation_code/modules/observations/obs_sequence_mod.f90 - assimilation_code/modules/utilities/distributed_state_mod.f90 - assimilation_code/modules/utilities/ensemble_manager_mod.f90 - assimilation_code/modules/utilities/netcdf_utilities_mod.f90 - assimilation_code/modules/utilities/null_mpi_utilities_mod.f90 - assimilation_code/modules/utilities/null_win_mod.f90 - assimilation_code/modules/utilities/obs_impact_mod.f90 - assimilation_code/modules/utilities/options_mod.f90 - assimilation_code/modules/utilities/parse_args_mod.f90 - assimilation_code/modules/utilities/random_seq_mod.f90 - assimilation_code/modules/utilities/sort_mod.f90 - assimilation_code/modules/utilities/time_manager_mod.f90 - 
assimilation_code/modules/utilities/types_mod.f90 - assimilation_code/modules/utilities/utilities_mod.f90 - assimilation_code/programs/model_mod_check/model_mod_check.f90 - models/your_model_here/model_mod.f90 - models/model_mod_tools/test_interpolate_threed_sphere.f90 - models/model_mod_tools/model_check_utilities_mod.f90 - models/utilities/default_model_mod.f90 - observations/forward_operators/obs_def_mod.f90 - observations/forward_operators/obs_def_utilities_mod.f90 - -Items highlighted may change based on which model is being tested. Files ----- @@ -341,63 +293,6 @@ Files Usage ----- -Normal circumstances indicate that you are trying to put a new model into DART, so to be able to build and run -``model_mod_check``, you will need to create a ``path_names_model_mod_check`` file with the following contents: - -:: - - assimilation_code/location/threed_sphere/location_mod.f90 - assimilation_code/location/utilities/default_location_mod.f90 - assimilation_code/location/utilities/location_io_mod.f90 - assimilation_code/modules/assimilation/adaptive_inflate_mod.f90 - assimilation_code/modules/assimilation/assim_model_mod.f90 - assimilation_code/modules/assimilation/assim_tools_mod.f90 - assimilation_code/modules/assimilation/cov_cutoff_mod.f90 - assimilation_code/modules/assimilation/filter_mod.f90 - assimilation_code/modules/assimilation/obs_model_mod.f90 - assimilation_code/modules/assimilation/quality_control_mod.f90 - assimilation_code/modules/assimilation/reg_factor_mod.f90 - assimilation_code/modules/assimilation/sampling_error_correction_mod.f90 - assimilation_code/modules/assimilation/smoother_mod.f90 - assimilation_code/modules/io/dart_time_io_mod.f90 - assimilation_code/modules/io/direct_netcdf_mod.f90 - assimilation_code/modules/io/io_filenames_mod.f90 - assimilation_code/modules/io/state_structure_mod.f90 - assimilation_code/modules/io/state_vector_io_mod.f90 - assimilation_code/modules/observations/forward_operator_mod.f90 - 
assimilation_code/modules/observations/obs_kind_mod.f90 - assimilation_code/modules/observations/obs_sequence_mod.f90 - assimilation_code/modules/utilities/distributed_state_mod.f90 - assimilation_code/modules/utilities/ensemble_manager_mod.f90 - assimilation_code/modules/utilities/netcdf_utilities_mod.f90 - assimilation_code/modules/utilities/null_mpi_utilities_mod.f90 - assimilation_code/modules/utilities/null_win_mod.f90 - assimilation_code/modules/utilities/obs_impact_mod.f90 - assimilation_code/modules/utilities/options_mod.f90 - assimilation_code/modules/utilities/parse_args_mod.f90 - assimilation_code/modules/utilities/random_seq_mod.f90 - assimilation_code/modules/utilities/sort_mod.f90 - assimilation_code/modules/utilities/time_manager_mod.f90 - assimilation_code/modules/utilities/types_mod.f90 - assimilation_code/modules/utilities/utilities_mod.f90 - assimilation_code/programs/model_mod_check/model_mod_check.f90 - models/your_model_here/model_mod.f90 - models/model_mod_tools/test_interpolate_threed_sphere.f90 - models/utilities/default_model_mod.f90 - observations/forward_operators/obs_def_mod.f90 - observations/forward_operators/obs_def_utilities_mod.f90 - -| as well as a ``mkmf_model_mod_check`` script. You should be able to look at any other ``mkmf_xxxx`` script and figure - out what to change. Once they exist: - -.. container:: unix - - :: - - [~/DART/models/yourmodel/work] % csh mkmf_model_mod_check - [~/DART/models/yourmodel/work] % make - [~/DART/models/yourmodel/work] % ./model_mod_check - Unlike other DART components, you are expected to modify ``model_mod_check.f90`` to suit your needs as you develop your ``model_mod``. 
The code is roughly divided into the following categories: diff --git a/assimilation_code/programs/obs_seq_to_netcdf/obs_seq_to_netcdf.rst b/assimilation_code/programs/obs_seq_to_netcdf/obs_seq_to_netcdf.rst index 42f62dd0ad..4a92bfdd40 100644 --- a/assimilation_code/programs/obs_seq_to_netcdf/obs_seq_to_netcdf.rst +++ b/assimilation_code/programs/obs_seq_to_netcdf/obs_seq_to_netcdf.rst @@ -431,49 +431,35 @@ make sensible plots of the observations. Some important aspects are highlighted. Usage ----- -Obs_seq_to_netcdf -~~~~~~~~~~~~~~~~~ - -| ``obs_seq_to_netcdf`` is built and run in ``/DART/observations/utilities/threed_sphere`` or - ``/DART/observations/utilities/oned`` or in the same way as the other DART components. That directory is intentionally - designed to hold components that are model-insensitive. Essentially, we avoid having to populate every ``model`` - directory with identical ``mkmf_obs_seq_to_netcdf`` and ``path_names_obs_seq_to_netcdf`` files. After the program has - been run, ``/DART/observations/utilities/threed_sphere/``\ ``plot_obs_netcdf.m`` can be run to plot the observations. - Be aware that the ``ObsTypesMetaData`` list is all known observation types and not only the observation types in the - netCDF file. - .. _example-1: -Example -^^^^^^^ - -.. container:: routine +Obs_seq_to_netcdf example - :: +.. code:: text - &schedule_nml - calendar = 'Gregorian', - first_bin_start = 2006, 8, 1, 3, 0, 0 , - first_bin_end = 2006, 8, 1, 9, 0, 0 , - last_bin_end = 2006, 8, 3, 3, 0, 0 , - bin_interval_days = 0, - bin_interval_seconds = 21600, - max_num_bins = 1000, - print_table = .true. - / + &schedule_nml + calendar = 'Gregorian', + first_bin_start = 2006, 8, 1, 3, 0, 0 , + first_bin_end = 2006, 8, 1, 9, 0, 0 , + last_bin_end = 2006, 8, 3, 3, 0, 0 , + bin_interval_days = 0, + bin_interval_seconds = 21600, + max_num_bins = 1000, + print_table = .true. 
+ / - &obs_seq_to_netcdf_nml - obs_sequence_name = '', - obs_sequence_list = 'olist', - append_to_netcdf = .false., - lonlim1 = 0.0, - lonlim2 = 360.0, - latlim1 = -80.0, - latlim2 = 80.0, - verbose = .false. - / + &obs_seq_to_netcdf_nml + obs_sequence_name = '', + obs_sequence_list = 'olist', + append_to_netcdf = .false., + lonlim1 = 0.0, + lonlim2 = 360.0, + latlim1 = -80.0, + latlim2 = 80.0, + verbose = .false. + / - > *cat olist* + > cat olist /users/thoar/temp/obs_0001/obs_seq.final /users/thoar/temp/obs_0002/obs_seq.final /users/thoar/temp/obs_0003/obs_seq.final diff --git a/assimilation_code/programs/obs_sequence_tool/obs_sequence_tool.rst b/assimilation_code/programs/obs_sequence_tool/obs_sequence_tool.rst index 51f0a7f393..81335dca1c 100644 --- a/assimilation_code/programs/obs_sequence_tool/obs_sequence_tool.rst +++ b/assimilation_code/programs/obs_sequence_tool/obs_sequence_tool.rst @@ -1,3 +1,5 @@ +.. _obs sequence tool: + program ``obs_sequence_tool`` ============================= diff --git a/conf.py b/conf.py index 29a230affc..32706c08a4 100644 --- a/conf.py +++ b/conf.py @@ -21,7 +21,7 @@ author = 'Data Assimilation Research Section' # The full version, including alpha/beta/rc tags -release = '10.6.1' +release = '10.6.2' master_doc = 'README' # -- General configuration --------------------------------------------------- diff --git a/guide/advice-for-new-collaborators.rst b/guide/advice-for-new-collaborators.rst index 013396e8f8..aef811989d 100644 --- a/guide/advice-for-new-collaborators.rst +++ b/guide/advice-for-new-collaborators.rst @@ -1,5 +1,7 @@ -Working with collaborators on porting new models -================================================ +.. _Using new models: + +Can I run my model with DART? +============================= The DART team often collaborates with other groups to help write the interface code to a new model. 
The most efficient way to get started is to meet with @@ -14,7 +16,7 @@ Goals of using DART ------------------- DART is the Data Assimilation Research Testbed. It is a collection of -tools and routines and scripts that allow users to built custom solutions +tools, routines, and scripts that allow users to build custom solutions and explore a variety of DA related efforts. It is not a turnkey system; it must be built before use and is often customized based on needs and goals. @@ -22,17 +24,20 @@ DART is often used for the following types of projects: - Learning about Data Assimilation (DA) - Using DART with an existing model and supported observations -- `Adding a DART interface to a new model`_ +- Using DART with a new model: :ref:`Porting new models` - Using new observations with DART in an existing model - Using both a new model and new observations with DART - Using DART to teach DA +You can view a list of models that are already supported at :ref:`Supported models` +and a list of supported observations at :ref:`programs`. + Everything on this "possible goals" list except adding support for a new model can generally be done by a single user with minimal help from the DART team. Therefore this discussion focuses only on adding a new model to DART. -Should I consider using DART? ------------------------------ +Should I consider using DART with my model? +------------------------------------------- DART is an ensemble-based DA system. It makes multiple runs of a model with slightly different inputs and uses the statistical distribution of the results @@ -312,43 +317,4 @@ of the state variables then doing it on demand is more efficient. The options here are namelist selectable at runtime and the impact on total runtime can be easily measured and compared. -Adding a DART interface to a new model --------------------------------------- - -DART provides a script ``new_model.sh`` which will create the necessary files -for a new model interface. 
-Enter ``./new_model.sh``, then the desired model name and location module separated -by spaces. This will create the necessary files to get started. - -For example to create a model interface for a model called BOUMME which uses -the 3D sphere location module: - -.. code-block:: text - - cd models - ./new_model.sh BOUMME threed_sphere - -This will create an BOUMME model directory with the following files: - -.. code-block:: text - - BOUMME/ - ├── model_mod.f90 - ├── readme.rst - └── work - ├── input.nml - └── quickbuild.sh - -- ``model_mod.f90`` is where to add the :doc:`required model_mod routines`. -- ``readme.rst`` is a stub to add documenation for your model interface. -- ``quickbuild.sh`` is used to compile DART for your model. - - -Templates are chosen based on location module input. The currently supported -location templates are for 3D and 1D modules, with the possibility for more -in the future. At the moment, ``threed_sphere``, ``threed_cartesian``, and -``oned`` will produce model_mod.f90 code that compile will sucessfully with ``./quickbuild.sh``. -We recommend looking at the existing supported models and reusing code from them if -possible. Models with similar grid types or vertical coordinates are good -candidates. diff --git a/guide/distributed_state.rst b/guide/distributed_state.rst index 8d690a5c91..e4e63bc503 100644 --- a/guide/distributed_state.rst +++ b/guide/distributed_state.rst @@ -87,7 +87,7 @@ of mpi_window mods: - cray_win_mod.f90 - no_cray_win_mod.f90 -| We have these two modules that you can swap in your path_names files because the MPI 2 standard states: +| We have these two modules that you can swap in because the MPI 2 standard states: | Implementors may restrict the use of RMA communication that is synchronized by lock calls to windows in memory allocated by MPI_ALLOC_MEM. | MPI_ALLOC_MEM uses cray pointers, thus we have supplied a window module that uses cray pointers. 
However, diff --git a/guide/how-to-test-your-model-mod-routines.rst b/guide/how-to-test-your-model-mod-routines.rst index 6417fd0e48..2465eb1fe4 100644 --- a/guide/how-to-test-your-model-mod-routines.rst +++ b/guide/how-to-test-your-model-mod-routines.rst @@ -1,11 +1,14 @@ How to test your model_mod routines =================================== -The program ``model_mod_check.f90`` can be used to test the routines -individually before running them with *filter*. Add a ``mkmf_model_mod_check`` -and ``path_names_model_mod_check`` to your ``DART/models/your_model/work`` -subdirectory. You might find it helpful to consult another model matching your -model type (simple or complex). See the documentation for :doc:`model_mod_check -<../assimilation_code/programs/model_mod_check/model_mod_check>` in -``DART/assimilation_code/programs/model_mod_check`` for more information on the -tests available. +The program ``model_mod_check.f90`` can be used to test model_mod routines +individually before running them with filter. Add ``model_mod_check`` +to the list of programs in ``DART/models/your_model/work/quickbuild.sh`` to +build ``model_mod_check`` with ``quickbuild.sh``. + +For more information on the tests in ``model_mod_check``, see :doc:`model_mod_check +<../assimilation_code/programs/model_mod_check/model_mod_check>`. + +For more information on ``quickbuild.sh``, see :ref:`DART build system`. + + diff --git a/guide/instructions-for-porting-a-new-model-to-dart.rst b/guide/instructions-for-porting-a-new-model-to-dart.rst new file mode 100644 index 0000000000..c27dbe004a --- /dev/null +++ b/guide/instructions-for-porting-a-new-model-to-dart.rst @@ -0,0 +1,295 @@ +.. _Porting new models: + +Instructions for porting a new model to DART +============================================ +To determine if your model is compatible with DART, see :ref:`Using new models`. + +DART provides a script ``new_model.sh`` which will create the necessary files +for a new model interface. 
+ +Templates are chosen based on location module input. The currently supported +location templates are for 3D and 1D modules, with the possibility for more +in the future. At the moment, ``threed_sphere``, ``threed_cartesian``, and +``oned`` will produce model_mod.f90 code that will compile successfully with ``./quickbuild.sh``. + +Enter ``./new_model.sh``, then the desired model name and location module separated +by spaces. This will create the necessary files to get started. + +For example, to create a model interface for a model called BOUMME which uses +the 3D sphere location module: + +.. code-block:: text + + cd models + ./new_model.sh BOUMME threed_sphere + +This will create a BOUMME model directory with the following files: + +.. code-block:: text + + BOUMME/ + ├── model_mod.f90 + ├── readme.rst + └── work + ├── input.nml + └── quickbuild.sh + +- ``model_mod.f90`` is where to add the :ref:`Required model_mod routines`. +- ``readme.rst`` is a stub to add documentation for your model interface. +- ``quickbuild.sh`` is used to compile DART for your model. + +Navigate to the work directory and enter ``./quickbuild.sh``; everything should compile at this point. +Please note that you will need to run ``./quickbuild.sh`` again after making edits to ``model_mod.f90`` to recompile. + +The DAReS team recommends looking at the existing supported models and reusing code from them where +possible when you write the code required for DART. Models with similar grid types +or vertical coordinates are good candidates. + +There is often a sensible default implementation that can be used for each of the required routines as well. For +more information on what the default behavior for each routine is and how to use the default implementations, +see :ref:`Required model_mod routines`. + +The required subroutines are these: + +.. 
code-block:: text + + public :: get_model_size, & + get_state_meta_data, & + model_interpolate, & + shortest_time_between_assimilations, & + static_init_model, & + init_conditions, & + adv_1step, & + nc_write_model_atts, & + pert_model_copies, & + nc_write_model_vars, & + init_time, & + get_close_obs, & + get_close_state, & + end_model, & + convert_vertical_obs, & + convert_vertical_state, & + read_model_time, & + write_model_time + + +If needed, model_mod can contain additional subroutines that are used +for any model-specific utility programs. No routines other than +these will be called by programs in the DART distribution. + +Edit the model_mod and fill in these routines: + +#. ``static_init_model()`` - make it read in any grid information + and the number of variables that will be in the state vector. + Fill in the model_size variable. Now ``get_model_size()`` and + ``get_model_time_step()`` from the template should be OK as-is. + +#. ``get_state_meta_data()`` - given an index number into the state vector, + return the location and kind. + +#. ``model_interpolate()`` - given a location (lon/lat/vert in 3d, x in 1d) + and a state QTY_xxx kind, return the value of the field interpolated + to that location. This is probably one of the routines that + will take the most code to write. + +For now, ignore these routines: + +.. code-block:: text + + nc_write_model_vars() + get_close_obs() + get_close_state() + end_model() + convert_vertical_obs() + convert_vertical_state() + read_model_time() + write_model_time() + +If you have data in an initial condition/restart file, then you +can ignore these routines: + +.. code-block:: text + + shortest_time_between_assimilations() + init_conditions() + +Otherwise, have them return an initial time and an initial default +ensemble state. + +If your model is NOT subroutine callable, you can ignore this routine: + +.. 
code-block:: text + + adv_1step() + +Otherwise, have it call the interface to your model and put the files +necessary to build your model into the models/YOUR_MODEL directory. + +If you want to let filter add Gaussian noise to a single state vector +to generate an ensemble, you can ignore this routine: + +.. code-block:: text + + pert_model_copies() + +Otherwise, fill in code that does whatever perturbation makes sense +to have an initial ensemble of states. In some cases that means +adding a different range of values to each different field in the +state vector. + +At this point you should have enough code to start testing with +the ``model_mod_check`` program. It is a stand-alone utility +that calls many of the model_mod interface routines and should +be easier to debug than some of the other DART programs. + + +Once you have that program working you should have enough code +to test and run simple experiments. + + +The general flow is: + +#. ``./create_obs_sequence`` - make a file with a single observation in it + +#. ``./perfect_model_obs`` - should interpolate a value for the obs + +#. Generate an ensemble of states, or set 'perturb_from_single_instance' to .true. + +#. Run ``./filter`` with the single observation + +#. Look at the preassim.nc and analysis.nc files. + Diff them with ``ncdiff``: + + .. code-block:: text + + ncdiff analysis.nc preassim.nc Innov.nc + + Plot it, with ``ncview`` if possible: + + .. 
code-block:: text + + ncview Innov.nc + + The difference between the two is the impact of that single observation. + See if it is at the right location and if the differences seem reasonable. + + +If your model data cannot be output in NetCDF file format, or cannot +be directly converted to NetCDF file format with the ncgen program, +there are two additional steps: + +* ``model_to_dart`` - read your native format and output data in NetCDF format + +* ``dart_to_model`` - write the updated data back to the native file format + + +More details on each of these five steps follow. There is a more in-depth description of each individual program here: :ref:`DART programs`. + +Running ``model_to_dart`` if needed +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If your model data is not stored in NetCDF file format, a program to +convert your data from the model to NetCDF is needed. It needs to +read your model data in whatever format it uses and create NetCDF +variables with the field names, and appropriate dimensions if these +are multi-dimensional fields (e.g. 2d or 3d). If the data is ASCII, +the generic NetCDF utility ncgen may be helpful. + +Running ``create_obs_sequence`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +You can make a synthetic observation (or a series of them) with this +interactive program and use them for testing. Before running, make sure +the observation types you want to use are in the input.nml file in the +&obs_kind_nml section, either in the assimilate or evaluate lists. + +Run the program. Give the total number of obs you want to create +(start with 1). Answer 0 to number of data items and 0 to number of +quality control items. Answer 0 when it says enter -1 to quit. You +will be prompted for an observation number to select what type of +observation you are going to test. + +Give it a location that should be inside your domain, someplace where +you can compute (by hand) what the correct value should be. 
When it +asks for time, give it a time that is the same as the time on your +model data. + +When it asks for error variance, at this point it does not matter much. +Give it something like 10% of the expected data value. Later on +this is going to matter a lot, but for testing the interpolation of +a single synthetic obs, this will do. + +For an output filename, it suggests 'set_def.out', but in this case +tell it 'obs_seq.in'. + + +Running ``perfect_model_obs`` +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Make sure the NetCDF file with your input data matches the input name +in the input.nml file, the &perfect_model_obs_nml namelist. +Make sure the input obs_sequence is still set to 'obs_seq.in'. +Run perfect_model_obs. Something bad will happen, most likely. Fix it. + +Eventually it will run and you will get an 'obs_seq.out' file. For these +tests, make sure &obs_sequence_nml : write_binary_obs_sequence = .false. +in the input.nml file. The sequence files will be short and in ASCII. +You can check to see what the interpolated value is. If it is right, congratulations. +If not, debug the interpolation code in the model_mod.f90 file. + + +Using a single input state +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the &filter_nml namelist, set 'perturb_from_single_instance' to .true. +This tells filter that you have not generated N initial conditions, +that you are only going to supply one and it needs to perturb that +one to generate an initial ensemble. Make sure the 'input_state_files' +matches the name of the single state vector file you have. You can +use the 'obs_seq.out' file from the perfect_model run because now +it has data for that observation. Later on you will need to decide +on how to generate a real set of initial states, and then you will +set 'perturb_from_single_instance' back to .false. and +supply N files instead of one. 
You may need to set the +&ensemble_manager_nml : perturbation_amplitude +down to something smaller than 0.2 for these tests - 0.00001 is a good +first guess for adding small perturbations to a state. + + +Running ``filter`` +~~~~~~~~~~~~~~~~~~ + +Set the ens_size to something small for testing - between 4 and 10 is +usually a good range. Make sure your observation type is in the +'assimilate_these_obs_types' list and not in the evaluate list. +Run filter. Find bugs and fix them until the output 'obs_seq.final' +seems to have reasonable values. Running filter will generate +NetCDF diagnostic files. The most useful for diagnosis will +be comparing preassim.nc and analysis.nc. + + +Diagnostics +~~~~~~~~~~~ + +Run 'ncdiff analysis.nc preassim.nc differences.nc' and use +your favorite netCDF plotting tool to see if there are any differences +between the two files. For models using a regular lat/lon grid, 'ncview' +is a quick way to scan files. For something on an irregular +grid a more complicated tool will have to be used. If the files are +identical, the assimilation didn't do anything. Check to see if there +is a non-zero DART quality control value in the obs_seq.final file. +Check to see if there are errors in the dart_log.out file. Figure out +why there's no change. If there is a difference, it should be at +the location of the observation and extend out from it for a short +distance. If it isn't in the right location, look at your get_state_meta_data() +code. If it doesn't have a reasonable value, look at your model_interpolate() code. + + +Running ``dart_to_model`` if needed +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After you have run filter, the files named in the 'output_state_files' namelist +item will contain the changed values. If your model is reading NetCDF format +it can ingest these directly. If not, an additional step is needed to copy +over the updated values for the next model run. 
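+The "single observation" sanity check described above can be sketched in a few lines. This is an illustrative Python fragment only, not DART code: the arrays are made up, and it simply mimics what inspecting the output of ``ncdiff analysis.nc preassim.nc`` should show, namely an analysis increment peaked at the observation's grid location and decaying over the localization distance.

```python
# Illustrative sketch (NOT part of DART): locate the largest analysis
# increment in two synthetic 1D state fields, mimicking what a plot of
# 'ncdiff analysis.nc preassim.nc' should show for a single observation.

def largest_increment(preassim, analysis):
    """Return (index, value) of the largest absolute analysis increment."""
    diffs = [a - p for p, a in zip(preassim, analysis)]
    idx = max(range(len(diffs)), key=lambda i: abs(diffs[i]))
    return idx, diffs[idx]

# Synthetic fields: assume the assimilation nudged the state near grid
# index 5 (the hypothetical observation location), decaying over 3 points.
preassim = [10.0] * 11
analysis = [10.0 + 0.5 * max(0.0, 1.0 - abs(i - 5) / 3.0) for i in range(11)]

idx, val = largest_increment(preassim, analysis)
print(idx, val)  # the peak increment should sit at the observation's index
```

If the peak is not where the observation was placed, that points at ``get_state_meta_data()``; if the peak value is unreasonable, that points at ``model_interpolate()``.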
+ diff --git a/guide/mpi_intro.rst b/guide/mpi_intro.rst index 5d161ebd7c..7a76ef03e3 100644 --- a/guide/mpi_intro.rst +++ b/guide/mpi_intro.rst @@ -9,25 +9,18 @@ MPI is both a library and run-time system that enables multiple copies of a single program to run in parallel, exchange data, and combine to solve a problem more quickly. -DART does **NOT** require MPI to run; the default build scripts do not need nor -use MPI in any way. However, for larger models with large state vectors and +DART does **NOT** require MPI to run. However, for larger models with large state vectors and large numbers of observations, the data assimilation step will run much faster in parallel, which requires MPI to be installed and used. However, if multiple ensembles of your model fit comfortably (in time and memory space) on a single processor, you need read no further about MPI. -MPI is an open-source standard; there are many implementations of it. If you +MPI is a `standard `_; there are many implementations of MPI. If you have a large single-vendor system it probably comes with an MPI library by default. For a Linux cluster there are generally more variations in what might -be installed; most systems use a version of MPI called MPICH. In smaller -clusters or dual-processor workstations a version of MPI called either LAM-MPI -or OpenMPI might be installed, or can be downloaded and installed by the end -user. +be installed; examples include `OpenMPI `_ and +`MVAPICH2 `_. -.. note:: - - OpenMP is a different parallel system; OpenMPI is a recent effort with a - confusingly similar name. An "MPI program" makes calls to an MPI library, and needs to be compiled with MPI include files and libraries. 
Generally the MPI installation includes a shell script called ``mpif90`` which adds the flags and libraries appropriate
@@ -42,16 +35,17 @@ system documentation or find an example of a successful MPI program compile comm
 
 DART use of MPI
 ~~~~~~~~~~~~~~~
 
-To run in parallel, only the DART 'filter' program (possibly the companion 'wakeup_filter' program), and the 'GSI2DART' observation converter need to be
-compiled with the MPI scripts. All other DART executables should be compiled with a standard F90 compiler and are not
-MPI enabled. (And note again that 'filter' can still be built as a single executable like previous releases of DART;
-using MPI and running in parallel is simply an additional option.) To build a parallel version of the 'filter'
-program, the 'mkmf_filter' command needs to be called with the '-mpi' option to generate a Makefile which compiles
-with the MPI scripts instead of the Fortran compiler.
+Several DART executables can make use of MPI. To build with MPI, make sure the ``$DART/mkmf/mkmf.template`` has
+the correct command for the MPI library you are using. Typically this is ``mpif90``,
+
+.. code::
+
+   MPIFC = mpif90
+   MPILD = mpif90
+
+but may be ``mpiifort`` if you are using the Intel MPI library. ``./quickbuild.sh`` will then build DART using MPI.
-Run ``quickbuild.sh`` will build DART with MPI. You will also need to edit the ``$DART/mkmf/mkmf.template`` file to call the proper version
-of the MPI compile script if it does not have the default name, is not in a standard location on the system, or needs
-additional options set to select between multiple Fortran compilers. To build without mpi, use ``quickbuild.sh nompi``.
+If you want to build DART without using MPI, run ``./quickbuild.sh nompi``.
 
MPI programs generally need to be started with a shell script called 'mpirun' or 'mpiexec', but they also interact with any batch control system that might be installed on the cluster or parallel system. 
Parallel systems with
diff --git a/guide/preprocess-program.rst
index deae8fc9ae..7a4f5e9349 100644
--- a/guide/preprocess-program.rst
+++ b/guide/preprocess-program.rst
@@ -17,9 +17,7 @@ can integrate along the raypath. Cosmic ray soil moisture sensors have forward operators that require site-specific calibration parameters that are not part of the model and must be included in the observation metadata.
-The potential examples are numerous.
-
-Since each 'observation quantity' may require different amounts of metadata to
+Since each observation type may require different amounts of metadata to
 be read or written, any routine to read or write an observation sequence **must** be compiled with support for those particular observations. This is the rationale for the inclusion of ``preprocess`` in DART. The supported
@@ -28,16 +26,17 @@ observations are listed in the ``obs_kind_nml`` namelist in ``input.nml``. For this reason, we strongly recommend that you use the DART routines to read and process DART observation sequence files.
+
 .. important::
-
-   You **must** actually run ``preprocess`` before building any executables.
-   It is an essential part of DART that enables the same code to interface with
-   multiple models and observation types. For example, ``preprocess`` allows
-   DART to assimilate synthetic observations for the Lorenz_63 model and real
-   radar reflectivities for WRF without needing to specify a set of radar
-   operators for the Lorenz_63 model.
+   Preprocess is built and run when you run ``quickbuild.sh``.
+
+Preprocess is used to insert observation-specific code into DART at compile time.
+This compile-time choice is how DART can assimilate synthetic observations for the
+Lorenz_63 model and real radar reflectivities for WRF without needing to specify a set of radar
+operators for the Lorenz_63 model. 
-``preprocess`` combines multiple ``obs_def`` and ``obs_quantity`` modules into one +``preprocess`` combines multiple ``obs_def`` modules into one ``obs_def_mod.f90`` that is then used by the rest of DART. Additionally, a new ``obs_kind_mod.f90`` is built that will provide support for associating the specific observation **TYPES** with corresponding (generic) observation @@ -49,7 +48,7 @@ observations and operators are supported. .. warning:: - If you want to add another ``obs_def`` module, you **must** rerun + If you want to add another ``obs_def`` module or quantity file, you **must** rerun ``preprocess`` and recompile the rest of your project. Example ``preprocess`` namelist @@ -60,8 +59,8 @@ As an example, if a ``preprocess_nml`` namelist in ``input.nml`` looks like: .. code-block:: fortran &preprocess_nml - input_obs_kind_mod_file = '../../../assimilation_code/modules/observations/DEFAULT_obs_kind_mod.F90' - output_obs_kind_mod_file = '../../../assimilation_code/modules/observations/obs_kind_mod.f90' + input_obs_qty_mod_file = '../../../assimilation_code/modules/observations/DEFAULT_obs_kind_mod.F90' + output_obs_qty_mod_file = '../../../assimilation_code/modules/observations/obs_kind_mod.f90' quantity_files = '../../../assimilation_code/modules/observations/atmosphere_quantities_mod.f90', input_obs_def_mod_file = '../../../observations/forward_operators/DEFAULT_obs_def_mod.F90' obs_type_files = '../../../observations/forward_operators/obs_def_gps_mod.f90', @@ -84,30 +83,3 @@ As an example, if a ``preprocess_nml`` namelist in ``input.nml`` looks like: into ``obs_def_mod.f90``. This resulting module can be used by the rest of the project. -Building and running ``preprocess`` ------------------------------------ - -Since ``preprocess`` is an executable, it must be compiled following the -procedure of all DART executables: - -1. The ``DART/build_templates/mkmf.template`` must be correct for your - environment. -2. 
The ``preprocess_nml`` namelist in ``input.nml`` must be set properly with - the modules you want to use. - -If those two conditions are met, you can build and run ``preprocess`` using -these commands: - -.. code-block:: - - $ csh mkmf_preprocess - $ make - $ ./preprocess - -The first command generates an appropriate ``Makefile`` and the -``input.nml.preprocess_default`` file. The second command results in the -compilation of a series of Fortran90 modules which ultimately produces the -``preprocess`` executable file. The third command actually runs preprocess - -which builds the new ``obs_kind_mod.f90`` and ``obs_def_mod.f90`` source code -files. Once these source code files are created, you can now build the rest of -DART. diff --git a/guide/required-model-mod-routines.rst b/guide/required-model-mod-routines.rst index 8ce7cdef5e..69f86654c7 100644 --- a/guide/required-model-mod-routines.rst +++ b/guide/required-model-mod-routines.rst @@ -1,3 +1,5 @@ +.. _Required model_mod routines: + Required model_mod routines =========================== @@ -74,23 +76,23 @@ advanced externally from DART. | 13. **pert_model_copies()** | *Perturb* a state vector in order to create an ensemble. | ``default_model_mod`` / ``models/utilities`` | Add Gaussian noise with a specified amplitude to | | | | | all parts of the state vector. | +-------------------------------------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------+---------------------------------------------------+ -| 14. **convert_vertical_obs()** | Some 3D models have multiple vertical coordinates (e.g. pressure, height, or | ``location_mod/`` ``assimilation_code/`` ``location/XXX`` | Do no conversion. \ *NOTE*: the particular | -| | model level); this method *converts observations* between different vertical | | sub-directory of ``location`` to use is set in | -| | coordinate systems. 
| | ``path_names_`` for each DART program. | +| 14. **convert_vertical_obs()** | Some 3D models have multiple vertical coordinates (e.g. pressure, height, or | ``location_mod/`` ``assimilation_code/`` ``location/XXX`` | Do no conversion. | +| | model level); this method *converts observations* between different vertical | | | +| | coordinate systems. | | | +-------------------------------------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------+---------------------------------------------------+ -| 15. **convert_vertical_state()** | Some 3D models have multiple vertical coordinates (e.g. pressure, height, or | ``location_mod/`` ``assimilation_code/`` ``location/XXX`` | Do no conversion. \ *NOTE*: the particular | -| | model level); this method *converts state* between different vertical | | sub-directory of ``location`` to use is set in | -| | coordinate systems. | | ``path_names_`` for each DART program. | +| 15. **convert_vertical_state()** | Some 3D models have multiple vertical coordinates (e.g. pressure, height, or | ``location_mod/`` ``assimilation_code/`` ``location/XXX`` | Do no conversion. | +| | model level); this method *converts state* between different vertical | | | +| | coordinate systems. | | | +-------------------------------------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------+---------------------------------------------------+ | 16. **get_close_obs()** | Calculate *which observations are “close”* to a given location and, | ``location_mod/`` ``assimilation_code/`` ``location/XXX`` | Uses the default behavior for determining | -| | optionally, the distance. This is used for localization to reduce sampling | | distance. \ *NOTE*: the particular sub-directory | -| | error. 
| | of ``location`` to use is set in | -| | | | ``path_names_`` for each DART program. | +| | optionally, the distance. This is used for localization to reduce sampling | | distance. | +| | error. | | | +| | | | | +-------------------------------------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------+---------------------------------------------------+ | 17. **get_close_state()** | Calculate *which state points are “close”* to a given location and, | ``location_mod/`` ``assimilation_code/`` ``location/XXX`` | Uses the default behavior for determining | -| | optionally, the distance. This is used for localization to reduce sampling | | distance. \ *NOTE*: the particular sub-directory | -| | error. | | of ``location`` to use is set in | -| | | | ``path_names_`` for each DART program. | +| | optionally, the distance. This is used for localization to reduce sampling | | distance. | +| | error. | | | +| | | | | +-------------------------------------------------+--------------------------------------------------------------------------------+-------------------------------------------------------------------------------------+---------------------------------------------------+ | 18. **nc_write_model_vars()** | This method is not currently called, so just use the default routine for now. | ``default_model_mod`` / ``models/utilities`` | Does nothing. | | | This method will be used in a future implementation. | | | diff --git a/models/README.rst b/models/README.rst index 1de281d8b6..6152cbcaca 100644 --- a/models/README.rst +++ b/models/README.rst @@ -1,3 +1,5 @@ +.. 
_Supported models: + Supported Models ================ @@ -42,265 +44,4 @@ DART supported models: - :doc:`wrf_hydro/readme` - :doc:`wrf/readme` - -Hints for porting a new model to DART: --------------------------------------- - -Copy the contents of the ``DART/models/template`` directory -into a ``DART/models/xxx`` directory for your new model. - -If your model is closer to the simpler examples (e.g. lorenz), -the existing model_mod.f90 is a good place to start. -If your model is a full 3d geophysical one (e.g. like cam, pop, etc) -then rename full_model_mod.f90 to model_mod.f90 and start there. - -Set ``LOCATION`` in quickbuild.sh to ``LOCATION=threed_sphere`` for -full 3d geophysical models. - -Try ``./quickbuild.sh`` and everything should compile at this point. - -The required subroutines are these: - -.. code-block:: text - - public :: get_model_size, & - get_state_meta_data, & - model_interpolate, & - shortest_time_between_assimilations, & - static_init_model, & - init_conditions, & - adv_1step, & - nc_write_model_atts, & - pert_model_copies, & - nc_write_model_vars, & - init_time, & - get_close_obs, & - get_close_state, & - end_model, & - convert_vertical_obs, & - convert_vertical_state, & - read_model_time, & - write_model_time - - -If needed, model_mod can contain additional subroutines that are used -for any model-specific utility programs. No routines other than -these will be called by programs in the DART distribution. - -Edit the model_mod and fill in these routines: - -#. ``static_init_model()`` - make it read in any grid information - and the number of variables that will be in the state vector. - Fill in the model_size variable. Now ``get_model_size()`` and - ``get_model_time_step()`` from the template should be ok as-is. - -#. ``get_state_meta_data()`` - given an index number into the state vector - return the location and kind. - -#. 
``model_interpolate()`` - given a location (lon/lat/vert in 3d, x in 1d) - and a state QTY_xxx kind, return the interpolated value the field - has at that location. this is probably one of the routines that - will take the most code to write. - -For now, ignore these routines: - -.. code-block:: text - - nc_write_model_vars() - get_close_obs() - get_close_state() - end_model() - convert_vertical_obs() - convert_vertical_state() - read_model_time() - write_model_time() - -If you have data in a dart initial condition/restart file, then you -can ignore these routines: - -.. code-block:: text - - shortest_time_between_assimilations() - init_conditions() - -Otherwise, have them return an initial time and an initial default -ensemble state. - -If your model is NOT subroutine callable, you can ignore this routine: - -.. code-block:: text - - adv_1step() - -Otherwise have it call the interface to your model and add the files -necessary to build your model to all the `work/path_names_*` files. -Add any needed model source files to a src/ directory. - -If you want to let filter add gaussian noise to a single state vector -to generate an ensemble, you can ignore this routine: - -.. code-block:: text - - pert_model_copies() - -Otherwise fill in code that does whatever perturbation makes sense -to have an initial ensemble of states. in some cases that means -adding a different range of values to each different field in the -state vector. - -At this point you should have enough code to start testing with -the ``model_mod_check`` program. It is a stand-alone utility -that calls many of the model_mod interface routines and should -be easier to debug than some of the other DART programs. - - -Once you have that program working you should have enough code -to test and run simple experiments. - - -The general flow is: - -#. ``./create_obs_sequence`` - make a file with a single observation in it - -#. ``./perfect_model_obs`` - should interpolate a value for the obs - -#. 
generate an ensemble of states, or set 'perturb_from_single_instance' to .true. - -#. run ``./filter`` with the single observation - -#. Look at the preassim.nc and analysis.nc files - Diff them with ``ncdiff``: - - .. code-block:: text - - ncdiff analysis.nc preassim.nc Innov.nc - - plot it, with ``ncview`` if possible: - - .. code-block:: text - - ncview Innov.nc - - The difference between the two is the impact of that single observation - see if it's at the right location and if the differences seem reasonable - - -If your model data cannot be output in NetCDF file format, or cannot -be directly converted to NetCDF file format with the ncgen program, -there are 2 additional steps: - -* ``model_to_dart`` - read your native format and output data in NetCDF format - -* ``dart_to_model`` - write the updated data back to the native file format - - -More details on each of these 5 steps follows. - -Running ``model_to_dart`` if needed -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -If your model data is not stored in NetCDF file format, a program to -convert your data from the model to NetCDF is needed. It needs to -read your model data in whatever format it uses and create NetCDF -variables with the field names, and appropriate dimensions if these -are multi-dimensional fields (e.g. 2d or 3d). If the data is ASCII, -the generic NetCDF utility ncgen may be helpful. - -Running ``create_obs_sequence`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -You can make a synthetic observation (or a series of them) with this -interactive program and use them for testing. Before running make sure -the observation types you want to use are in the input.nml file in the -&obs_kind_nml section, either in the assimilate or evaluate lists. - -Run the program. Give the total number of obs you want to create -(start with 1). Answer 0 to number of data items and 0 to number of -quality control items. Answer 0 when it says enter -1 to quit. 
You -will be prompted for an observation number to select what type of -observation you are going to test. - -Give it a location that should be inside your domain, someplace where -you can compute (by hand) what the correct value should be. When it -asks for time, give it a time that is the same as the time on your -model data. - -When it asks for error variance, at this point it doesn't matter. -give it something like 10% of the expected data value. Later on -this is going to matter a lot, but for testing the interpolation of -a single synthetic obs, this will do. - -For an output filename, it suggests 'set_def.out' but in this case -tell it 'obs_seq.in'. - - -Running ``perfect_model_obs`` -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Make sure the NetCDF file with your input data matches the input name -in the input.nml file, the &perfect_model_obs_nml namelist. -Make sure the input obs_sequence is still set to 'obs_seq.in'. -run perfect_model_obs. Something bad will happen, most likely. Fix it. - -Eventually it will run and you will get an 'obs_seq.out' file. For these -tests, make sure &obs_sequence_nml : write_binary_obs_sequence = .false. -in the input.nml file. The sequence files will be short and in ascii. -You can check to see what the interpolated value is. if it's right, congratulations. -If not, debug the interpolation code in the model_mod.f90 file. - - -Using a single input state -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In the &filter_nml namelist, set 'perturb_from_single_instance' to .true. -this tells filter that you have not generated N initial conditions, -that you are only going to supply one and it needs to perturb that -one to generate an initial ensemble. Make sure the 'input_state_files' -matches the name of the single state vector file you have. You can -use the 'obs_seq.out' file from the perfect_model run because now -it has data for that observation. 
Later on you will need to decide -on how to generate a real set of initial states, and then you will -set 'perturb_from_single_instance' back to .false. and -supply N files instead of one. You may need to set the -&ensemble_manager_nml : perturbation_amplitude -down to something smaller than 0.2 for these tests - 0.00001 is a good -first guess for adding small perturbations to a state. - - -Running ``filter`` -~~~~~~~~~~~~~~~~~~ - -Set the ens_size to something small for testing - between 4 and 10 is -usually a good range. Make sure your observation type is in the -'assimilate_these_obs_types' list and not in the evaluate list. -run filter. Find bugs and fix them until the output 'obs_seq.final' -seems to have reasonable values. Running filter will generate -NetCDF diagnostic files. The most useful for diagnosis will -be comparing preassim.nc and analysis.nc. - - -Diagnostics -~~~~~~~~~~~ - -Run 'ncdiff analysis.nc preassim.nc differences.nc' and use -your favorite netcdf plotting tool to see if there are any differences -between the 2 files. For modules using a regular lat/lon grid 'ncview' -is a quick way to scan files. For something on an irregular -grid a more complicated tool will have to be used. If the files are -identical the assimilation didn't do anything. Check to see if there -is a non-zero DART quality control value in the obs_seq.final file. -Check to see if there are errors in the dart_log.out file. Figure out -why there's no change. If there is a difference, it should be at -the location of the observation and extend out from it for a short -distance. If it isn't in the right location, look at your get_state_meta_data() -code. If it doesn't have a reasonable value, look at your model_interpolate() code. - - -Running ``dart_to_model`` if needed -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -After you have run filter, the files named in the 'output_state_files' namelist -item will contain the changed values. 
If your model is reading NetCDF format -it can ingest these directly. If not, an additional step is needed to copy -over the updated values for the next model run. - + If you are interested in creating a DART interface for a new model, see :ref:`Using new models` and :ref:`Porting new models`. diff --git a/models/dynamo/README b/models/dynamo/README index 2710f6bbf9..9570f5cf8a 100644 --- a/models/dynamo/README +++ b/models/dynamo/README @@ -69,25 +69,6 @@ The list of files attached: ./work/ens_time_series.ps - Sample run output (ps version of ens time series) ./work/ens_time_series.pdf - Sample run output (pdf version of ens time series) -./work/mkmf_dart_to_model -./work/mkmf_dobsdx -./work/mkmf_gau_param -./work/mkmf_model_to_dart -./work/mkmf_x_plus_delta -./work/mkmf_create_fixed_network_seq -./work/mkmf_perfect_model_obs -./work/mkmf_filter - -./work/path_names_create_fixed_network_seq -./work/path_names_dart_to_model -./work/path_names_dobsdx -./work/path_names_filter -./work/path_names_gau_param -./work/path_names_model_to_dart -./work/path_names_perfect_model_obs -./work/path_names_x_plus_delta - - # # $URL$ diff --git a/models/lorenz_96_2scale/shell_scripts/run_expt.pl b/models/lorenz_96_2scale/shell_scripts/run_expt.pl deleted file mode 100755 index c60e187e51..0000000000 --- a/models/lorenz_96_2scale/shell_scripts/run_expt.pl +++ /dev/null @@ -1,122 +0,0 @@ -#!/usr/bin/perl -# -# DART software - Copyright UCAR. 
This open source software is provided -# by UCAR, "as is", without charge, subject to all terms of use at -# http://www.image.ucar.edu/DAReS/DART/DART_download -# -# DART $Id$ - -use File::Copy; - -my $ensemble_size = 100; -my $cutoff = 400.0; #cutoff radius for Schur product -my $cov_inflate = 1.1; #covariance inflation factor -my $num_groups = 10; #number of groups -my $first_obs = "0 43200"; #days, seconds -my $obs_period = "0 43200"; #days, seconds -my $num_periods = "24"; #number of obs times - -#my $startobs = 1; #first obs in set of X -#my $nobs = 36; #identity for X -#my $error_variance = 1.0; #better for X -my $startobs = 37; #first obs in set of Y -my $nobs = 360; #identity for Y -my $error_variance = 0.1; #better for Y - -my $max_obs = $nobs * $num_periods; - -# create a set def -my $command = "mkmf_create_obs_sequence\n"; -system "$command"; -copy "input.nml.create_obs_sequence_default", "input.nml"; - -open INFILE, ">tmp"; -print INFILE "$max_obs\n"; #max obs -print INFILE "0\n"; #0 copies for def -print INFILE "0\n"; #no QC -for $iob ( 1 .. 
$nobs ) { - print INFILE "$iob\n"; #denotes identity - $thisob = -($iob + $startobs - 1); #identity location - print INFILE "$thisob\n"; #denotes identity - print INFILE "0 0\n"; #secs days - print INFILE "$error_variance\n"; #error variance -} -print INFILE "-1\n"; #done -print INFILE "obs_seq_def.out\n"; #def file name -close INFILE; - -$command = "create_obs_sequence < tmp\n"; -system "$command"; -unlink "tmp"; - -# create an identity obs sequence -# propagate through times -my $command = "mkmf_create_fixed_network_seq\n"; -system "$command"; -copy "input.nml.create_fixed_network_seq_default", "input.nml"; - -open INFILE, ">tmp"; -print INFILE "obs_seq_def.out\n"; -print INFILE "1\n"; #flag regular interval -print INFILE "$num_periods\n"; -print INFILE "$first_obs\n"; #time of initial ob -print INFILE "$obs_period\n"; #time between obs -print INFILE "obs_seq.in\n"; #output file -close INFILE; - -$command = "create_fixed_network_seq < tmp \n"; -system "$command\n"; -unlink "tmp"; - -# create the obs -$command = "mkmf_perfect_model_obs"; -system "$command\n"; -my $template = "input.nml.perfect_model_obs_default"; -open INFILE, $template; -my $nlfile = "input.nml"; -open OUTFILE, ">$nlfile"; -while () { - s/start_from_restart\s+=\s+.\w.+/start_from_restart = .true./; - s/output_restart\s+=\s+.\w.+/output_restart = .true./; - s/restart_in_file_name\s+=\s+"\w+"/restart_in_file_name = \"perfect_ics\"/; - s/restart_out_file_name\s+=\s+"\w+"/restart_out_file_name = \"perfect_restart\"/; - print OUTFILE; -} -close OUTFILE; -close INFILE; - -my $command = "perfect_model_obs"; -system "$command"; - -#run the filter -$command = "mkmf_filter"; -system "$command\n"; -$template = "input.nml.filter_default"; -open INFILE, $template; -$nlfile = "input.nml"; -open OUTFILE, ">$nlfile"; -while () { - s/ens_size\s+=\s+\w+/ens_size = $ensemble_size/; - s/cutoff\s+=\s+\w+.\w+/cutoff = $cutoff/; - s/cov_inflate\s+=\s+\w+.\w+/cov_inflate = $cov_inflate/; - 
s/start_from_restart\s+=\s+.\w.+/start_from_restart = .true./; - s/output_restart\s+=\s+.\w.+/output_restart = .true./; - s/restart_in_file_name\s+=\s+"\w+"/restart_in_file_name = \"filter_ics\"/; - s/restart_out_file_name\s+=\s+"\w+"/restart_out_file_name = \"filter_restart\"/; - s/num_output_state_members\s+=\s+\w+/num_output_state_members = $ensemble_size/; - s/num_groups\s+=\s+\w+/num_groups = $num_groups/; - print OUTFILE; -} -close OUTFILE; -close INFILE; - -my $command = "filter"; -system "$command"; - -exit 0 - -# -# $URL$ -# $Date$ -# $Revision$ - diff --git a/models/lorenz_96_2scale/shell_scripts/spinup_model.pl b/models/lorenz_96_2scale/shell_scripts/spinup_model.pl deleted file mode 100755 index 93a7a0483c..0000000000 --- a/models/lorenz_96_2scale/shell_scripts/spinup_model.pl +++ /dev/null @@ -1,137 +0,0 @@ -#!/usr/bin/perl -# -# DART software - Copyright UCAR. This open source software is provided -# by UCAR, "as is", without charge, subject to all terms of use at -# http://www.image.ucar.edu/DAReS/DART/DART_download -# -# DART $Id$ - -use File::Copy; -use File::Path; - -my $num_spinup_days = 1000; - -my $ensemble_size = 100; -my $error_variance = 1.0; - -my $command = "mkmf_create_obs_sequence\n"; -system "$command"; -copy "input.nml.create_obs_sequence_default", "input.nml"; - -# create a set def -open INFILE, ">tmp"; -print INFILE "$num_spinup_days\n"; #max obs -print INFILE "0\n"; #0 copies for def -print INFILE "0\n"; #no QC -print INFILE "1\n"; #only one obs -print INFILE "-1\n"; #identity -print INFILE "0 0\n"; #secs days -print INFILE "100000\n"; #error variance -print INFILE "-1\n"; #done -print INFILE "obs_seq_def.out\n"; #def file name -close INFILE; - -$command = "create_obs_sequence < tmp\n"; -system "$command"; -unlink "tmp"; - -# create an identity obs sequence -# propagate through times -my $command = "mkmf_create_fixed_network_seq\n"; -system "$command"; -copy "input.nml.create_fixed_network_seq_default", "input.nml"; - -open INFILE, 
">tmp"; -print INFILE "obs_seq_def.out\n"; -print INFILE "1\n"; #regularly repeating times -print INFILE "$num_spinup_days\n"; #days -print INFILE "1,0\n"; #time of initial ob (1 dy) -print INFILE "1,0\n"; #time between obs (1/dy) -print INFILE "obs_seq.in\n"; #output file -close INFILE; - -my $command = "create_fixed_network_seq < tmp \n"; -system "$command"; -unlink "tmp"; - -# init the model onto the attrctor -$command = "mkmf_perfect_model_obs"; -system "$command\n"; -$template = "input.nml.perfect_model_obs_default"; -open INFILE, $template; -my $nlfile = "input.nml"; -open OUTFILE, ">$nlfile"; -while () { - s/start_from_restart\s+=\s+.\w.+/start_from_restart = .false./; - s/output_restart\s+=\s+.\w.+/output_restart = .true./; - s/restart_in_file_name\s+=\s+"\w+"/restart_in_file_name = \"perfect_ics\"/; - s/restart_out_file_name\s+=\s+"\w+"/restart_out_file_name = \"perfect_restart\"/; - print OUTFILE; -} -close OUTFILE; -close INFILE; - -my $command = "perfect_model_obs"; -system "$command"; - -# generate a set of ICs -open INFILE, $template; -open OUTFILE, ">$nlfile"; -while () { - s/start_from_restart\s+=\s+.\w.+/start_from_restart = .true./; - s/output_restart\s+=\s+.\w.+/output_restart = .true./; - s/restart_in_file_name\s+=\s+"\w+"/restart_in_file_name = \"perfect_ics\"/; - s/restart_out_file_name\s+=\s+"\w+"/restart_out_file_name = \"perfect_restart\"/; - print OUTFILE; -} -close OUTFILE; -close INFILE; - -# save deterministic spun-up ICs -copy "perfect_restart","perfect_ics"; -# actually create the truth run -$command = "perfect_model_obs"; -system "$command"; - -# Generate an ensemble -$command = "mkmf_filter"; -system "$command\n"; -$template = "input.nml.filter_default"; -open INFILE, $template; -my $nlfile = "input.nml"; -open OUTFILE, ">$nlfile"; -while () { - s/ens_size\s+=\s+\w+/ens_size = $ensemble_size/; - s/cutoff\s+=\s+\w+.\w+/cutoff = 0.0/; - s/cov_inflate\s+=\s+\w+.\w+/cov_inflate = 1.0/; - 
s/start_from_restart\s+=\s+.\w.+/start_from_restart = .false./; - s/output_restart\s+=\s+.\w.+/output_restart = .true./; - s/restart_in_file_name\s+=\s+"\w+"/restart_in_file_name = \"perfect_ics\"/; - s/restart_out_file_name\s+=\s+"\w+"/restart_out_file_name = \"filter_restart\"/; - s/num_output_state_members\s+=\s+\w+/num_output_state_members = $ensemble_size/; - print OUTFILE; -} -close OUTFILE; -close INFILE; - -# spin up -$command = "filter"; -system "$command"; -# this will be the ICs for the truth run -copy "perfect_restart","perfect_ics"; -# these are the ICs to use in the ensemble -copy "filter_restart","filter_ics"; - -# save the diagnostic files so we can see the spinup -mkpath (["spinup"]); -copy "preassim.nc","spinup/preassim.nc"; -copy "analysis.nc","spinup/analysis.nc"; -copy "true_state.nc","spinup/true_state.nc"; - -exit 0 - -# -# $URL$ -# $Revision$ -# $Date$ - diff --git a/models/wrf/WRF_BC/README b/models/wrf/WRF_BC/README index adc0834235..24ac008247 100644 --- a/models/wrf/WRF_BC/README +++ b/models/wrf/WRF_BC/README @@ -28,12 +28,8 @@ wrfbdy_d01 will be OVERWRITTEN. 
------------------------------------------------------------------ Compile: - - mkmf_update_wrf_bc - - make - - (make clean to remove objs and execs) + + ./quickbuild.sh Run: diff --git a/observations/obs_converters/AIRS/BUILD_HDF-EOS.sh b/observations/obs_converters/AIRS/BUILD_HDF-EOS.sh index af577b47cf..3cba5115f3 100755 --- a/observations/obs_converters/AIRS/BUILD_HDF-EOS.sh +++ b/observations/obs_converters/AIRS/BUILD_HDF-EOS.sh @@ -19,7 +19,7 @@ echo echo 'setenv("NCAR_INC_HDFEOS5", "/glade/u/apps/ch/opt/hdf-eos5/5.1.16/intel/19.0.5/include")' echo 'setenv("NCAR_LDFLAGS_HDFEOS5","/glade/u/apps/ch/opt/hdf-eos5/5.1.16/intel/19.0.5/lib")' echo 'setenv("NCAR_LIBS_HDFEOS5","-Wl,-Bstatic -lGctp -lhe5_hdfeos -lsz -lz -Wl,-Bdynamic")' -echo 'which we then use in mkmf_convert_airs_L2' +echo 'which we then use when compiling convert_airs_L2' echo echo 'If you need to build the HDF-EOS and/or the HDF-EOS5 libraries, you may ' echo 'try to follow the steps outlined in this script. They will need to be ' diff --git a/observations/obs_converters/GSI2DART/readme.rst b/observations/obs_converters/GSI2DART/readme.rst index 8c397e6f91..3c8e521b50 100644 --- a/observations/obs_converters/GSI2DART/readme.rst +++ b/observations/obs_converters/GSI2DART/readme.rst @@ -56,8 +56,7 @@ Note that within ``GSI`` the source file ``kinds.F90`` has an upper-case ``F90`` suffix. Within the ``GSI2DART`` observation converter, it gets preprocessed into ``mykinds.f90`` with a lower-case ``f90`` suffix. Case-insensitive filesystems should be banned ... until then, it is more robust to implement some name change -during preprocessing. The path name specified -in ``GSI2DART/work/path_names_gsi_to_dart`` reflects this processed filename. +during preprocessing. The following three files had their open() statements modified to read 'BIG_ENDIAN' files without the need to compile EVERYTHING with @@ -95,16 +94,6 @@ radiance observation types.
- Modified ``../../DEFAULT_obs_kind_mod.F90`` - Added ``../../forward_operators/obs_def_radiance_mod.f90`` which has radiance observation types -Compiler notes -~~~~~~~~~~~~~~ - -When using ifort, the Intel Fortran compiler, you may need to add the compiler -flag ``-nostdinc`` to avoid inserting the standard C include files which have -incompatible comment characters for Fortran. You can add this compiler flag -in the the ``GSI2DART/work/mkmf_gsi_to_dart`` file by adding it to the "-c" -string contents. - -*Please note: this was NOT needed for ifort version 19.0.5.281.* Additional files and directories ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -143,7 +132,7 @@ The converter has been tested with 64-bit reals as well as 32-bit reals This requires changes in two places: 1. ``DART/assimilation_code/modules/utilities/types_mod.f90`` change required: r8 = r4 -2. ``GSI2DART/work/mkmf_gsi_to_dart`` change required: ``-D_REAL4_`` +2. ``GSI2DART/work/quickbuild.sh`` change required: ``-D_REAL4_`` If these are not set in a compatible fashion, you will fail to compile with the following error (or something similar): diff --git a/observations/obs_converters/NCEP/ascii_to_obs/create_real_obs.rst b/observations/obs_converters/NCEP/ascii_to_obs/create_real_obs.rst index 96c1101d99..e279d7b477 100644 --- a/observations/obs_converters/NCEP/ascii_to_obs/create_real_obs.rst +++ b/observations/obs_converters/NCEP/ascii_to_obs/create_real_obs.rst @@ -16,12 +16,8 @@ etc.) and the DART observation variables (U, V, T, Q, Ps) which are specified in Instructions ------------ -- Go to DART/observations/NCEP/ascii_to_obs/work -- Use ``quickbuild.sh`` to compile all executable programs in the directory. To rebuild just one program: - - - Use ``mkmf_create_real_obs`` to generate the makefile to compile ``create_real_obs.f90``. - - Type ``make`` to get the executable. 
- +- Go to DART/observations/obs_converters/NCEP/ascii_to_obs/work +- Use ``quickbuild.sh`` to compile all executable programs in the directory. - Make appropriate changes to the ``&ncep_obs_nml`` namelist in ``input.nml``, as follows. - run ``create_real_obs``. @@ -258,7 +254,6 @@ Modules used Files ----- -- path_names_create_real_obs; the list of modules used in the compilation of create_real_obs. - temp_obs.yyyymmdd; (input) NCEP BUFR (decoded/intermediate) observation file(s) Each one has 00Z of the next day on it. - input.nml; the namelist file used by create_real_obs. @@ -267,4 +262,4 @@ Files References ---------- -- .../DART/observations/NCEP/prep_bufr/docs/\* (NCEP text files describing the BUFR files) +- DART/observations/obs_converters/NCEP/prep_bufr/docs (NCEP text files describing the BUFR files) diff --git a/observations/obs_converters/NSIDC/work/quickbuild.csh b/observations/obs_converters/NSIDC/work/quickbuild.csh deleted file mode 100755 index 51469775fc..0000000000 --- a/observations/obs_converters/NSIDC/work/quickbuild.csh +++ /dev/null @@ -1,62 +0,0 @@ -#!/bin/csh -# -# DART software - Copyright UCAR. This open source software is provided -# by UCAR, "as is", without charge, subject to all terms of use at -# http://www.image.ucar.edu/DAReS/DART/DART_download -# -# This script compiles all executables in this directory. - -#---------------------------------------------------------------------- -# 'preprocess' is a program that culls the appropriate sections of the -# observation module for the observations types in 'input.nml'; the -# resulting source file is used by all the remaining programs, -# so this MUST be run first. 
-#---------------------------------------------------------------------- - -set nonomatch -\rm -f preprocess *.o *.mod Makefile - -set MODEL = "SMAP converter" - -@ n = 1 - -echo -echo -echo "---------------------------------------------------------------" -echo "${MODEL} build number ${n} is preprocess" - -csh mkmf_preprocess -make || exit $n - -./preprocess || exit 99 - -#---------------------------------------------------------------------- -# Build all the single-threaded targets -#---------------------------------------------------------------------- - -foreach TARGET ( mkmf_* ) - - set PROG = `echo $TARGET | sed -e 's#mkmf_##'` - - switch ( $TARGET ) - case mkmf_preprocess: - breaksw - default: - @ n = $n + 1 - echo - echo "---------------------------------------------------" - echo "${MODEL} build number ${n} is ${PROG}" - \rm -f ${PROG} - csh $TARGET || exit $n - make || exit $n - breaksw - endsw -end - -\rm -f *.o *.mod input.nml*_default Makefile .cppdefs - -echo "Success: All ${MODEL} programs compiled." - -exit 0 - - diff --git a/observations/obs_converters/SIF/SIF_to_obs_netcdf.rst b/observations/obs_converters/SIF/SIF_to_obs_netcdf.rst index 6948ab4aff..a6235d0178 100644 --- a/observations/obs_converters/SIF/SIF_to_obs_netcdf.rst +++ b/observations/obs_converters/SIF/SIF_to_obs_netcdf.rst @@ -30,7 +30,7 @@ Standard workflow: #. Make note of the SIF wavelength the data is centered upon. This information is included in the SIF variable of netcdf file ``SIF_740_daily_corr`` #. Build the DART executables with support for land observations. This is done by running - ``preprocess`` with ``obs_def_land_mod.f90`` in the list of ``input_files`` for + ``quickbuild.sh`` with ``obs_def_land_mod.f90`` in the list of ``input_files`` for ``preprocess_nml``. #. Provide basic information via the ``SIF_to_obs_netcdf_nml`` (e.g. verbose, wavelength) #. Convert single or multiple SIF netcdf data files using ``SIF_to_obs_netcdf``. 
Converting @@ -193,17 +193,3 @@ to generate a long-term global high spatial-resolution solar-induced chlorophyll fluorescence (SIF)." Remote Sensing of Environment 239 (2020): 111644.https://doi.org/10.1016/j.rse.2020.111644 - - - -Programs -------- - -The ``SIF_to_obs_netcdf.f90`` file is the source for the main converter program. -To compile and test, go into the work subdirectory and run ``mkmf_preprocess``, run -the ``Makefile`` and finally run ``preprocess``. Be sure that ``obs_def_land_mod.f90`` -is included as an input file within ``&preprocess_nml`` of the ``input.nml``. - -Next compile the observation converter by running ``mkmf_SIF_to_obs_netcdf``, run -``Makefile``, and finally run ``SIF_to_obs_netcdf``. - diff --git a/observations/obs_converters/quikscat/QuikSCAT.rst b/observations/obs_converters/quikscat/QuikSCAT.rst index 4bb8996550..e942f9a8e8 100644 --- a/observations/obs_converters/quikscat/QuikSCAT.rst +++ b/observations/obs_converters/quikscat/QuikSCAT.rst @@ -5,12 +5,12 @@ Overview -------- NASA's QuikSCAT mission is described in -`http://winds.jpl.nasa.gov/missions/quikscat/ `__. "QuikSCAT" +`Quick Scatterometer `_. "QuikSCAT" refers to the satellite, "SeaWinds" refers to the instrument that provides near-surface wind speeds and directions over large bodies of water. QuikSCAT has an orbit of about 100 minutes, and the SeaWinds microwave radar covers a swath under the satellite. The swath is comprised of successive scans (or rows) and each scan has many wind-vector-cells (WVCs). For the purpose of this document, we will focus only the **Level 2B** product at 25km resolution. If you go to the official -JPL data distribution site http://podaac.jpl.nasa.gov/DATA_CATALOG/quikscatinfo.html , we are using the product labelled +JPL data distribution site `podaac.jpl.nasa.gov `_, we are using the product labelled **L2B OWV 25km Swath**. Each orbit consists of (potentially) 76 WVCs in each of 1624 rows or scans.
The azimuthal diversity of the radar returns affects the error characteristics of the retrieved wind speeds and directions, as does rain, interference of land in the radar footprint, and very low wind speeds. Hence, not all wind retrievals are created @@ -18,7 +18,7 @@ equal. The algorithm that converts the 'sigma naughts' (the measure of radar backscatter) into wind speeds and directions has multiple solutions. Each candidate solution is called an 'ambiguity', and there are several ways of choosing 'the best' -ambiguity. Beauty is in the eye of the beholder. At present, the routine to convert the original L2B data files (one per +ambiguity. At present, the routine to convert the original L2B data files (one per orbit) in HDF format into the DART observation sequence file makes several assumptions: #. All retrievals are labelled with a 10m height, in accordance with the retrieval algorithm. @@ -34,13 +34,13 @@ orbit) in HDF format into the DART observation sequence file makes several assum Data sources ------------ -The NASA Jet Propulsion Laboratory (JPL) `data repository `__ has a +The NASA Jet Propulsion Laboratory (JPL) `data repository `_ has a collection of animations and data sets from this instrument. In keeping with NASA tradition, these data are in HDF -format (specifically, HDF4), so if you want to read these files directly, you will need to install the HDF4 libraries -(which can be downloaded from http://www.hdfgroup.org/products/hdf4/) +format (specifically, HDF4), so if you want to read these files directly, you will need to install the +`HDF4 libraries `_. If you go to the official JPL data distribution site http://podaac.jpl.nasa.gov/DATA_CATALOG/quikscatinfo.html, we are -using the product labelled **L2B OWV 25km Swath**. They are organized in folders by day ... with each orbit (each +using the product labelled **L2B OWV 25km Swath**. They are organized in folders by day, with each orbit (each revolution) in one compressed file. 
There are 14 revolutions per day. The conversion to DART observation sequence format is done on each revolution, multiple revolutions may be combined 'after the fact' by any ``obs_sequence_tool`` in the ``work`` directory of any model. @@ -49,44 +49,46 @@ Programs -------- There are several programs that are distributed from the JPL www-site, -ftp://podaac.jpl.nasa.gov/pub/ocean_wind/quikscat/L2B/sw/; we specifically started from the Fortran file -`read_qscat2b.f `__ and modified it to -be called as a subroutine to make it more similar to the rest of the DART framework. The original ``Makefile`` and -``read_qscat2b.f`` are included in the DART distribution in the ``DART/observations/quikscat`` directory. You will have -to modify the ``Makefile`` to build the executable. +ftp://podaac.jpl.nasa.gov/pub/ocean_wind/quikscat/L2B/sw/; we modified the Fortran file +`read_qscat2b.f `__ +to be a subroutine for use with DART. For reference, the original ``read_qscat2b.f`` and ``Makefile`` +are included in the ``DART/observations/quikscat`` directory. + convert_L2b.f90 ~~~~~~~~~~~~~~~ -``convert_L2b`` is the executable that reads the HDF files distributed by JPL. ``DART/observations/quikscat/work`` has -the expected ``mkmf_convert_L2b`` and ``path_names_convert_L2b`` files and compiles the executable in the typical DART -fashion - with one exception. The location of the HDF (and possible dependencies) installation must be conveyed to the -``mkmf`` build mechanism. Since this information is not required by the rest of DART, it made sense (to me) to isolate -it in the ``mkmf_convert_L2b`` script. **It will be necessary to modify the ``mkmf_convert_L2b`` script to be able to -build ``convert_L2b``**. In particular, you will have to change the two lines specifying the location of the HDF (and -probably the JPG) libraries. The rest of the script should require little, if any, modification. +``convert_L2b`` converts the HDF files distributed by JPL to an obs_sequence file.
+To build ``convert_L2b`` using ``quickbuild.sh`` you will first need to build the HDF4 library. + +.. warning:: + + To avoid conflicts with the netCDF library required by DART, we recommend building HDF4 *without* + the HDF4 versions of the NetCDF API. -.. container:: routine +After successfully building HDF, add the appropriate library flags to your mkmf.template file. +Below is a snippet from an mkmf.template file used to link to both NetCDF and HDF4. + +.. code:: text + + NETCDF = /glade/u/apps/ch/opt/netcdf/4.8.1/intel/19.1.1 + HDF = /glade/p/cisl/dares/libraries/hdf + + INCS = -I$(NETCDF)/include -I$(HDF)/include + LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf -L$(HDF)/lib -lmfhdf -ljpeg + FFLAGS = -O -assume buffered_io $(INCS) + LDFLAGS = $(FFLAGS) $(LIBS) - set JPGDIR = */contrib/jpeg-6b_gnu-4.1.2-64* - set HDFDIR = */contrib/hdf-4.2r4_gnu-4.1.2-64* There are a lot of observations in every QuikSCAT orbit. Consequently, the observation sequence files are pretty large - particularly if you use the ASCII format. Using the binary format (i.e. *obs_sequence_nml:write_binary_obs_sequence = .true.*) will result in observation sequence files that are about *half* the size of the ASCII format. Since there are about 14 QuikSCAT orbits per day, it may be useful to convert individual orbits to an observation -sequence file and then concatenate multiple observation sequence files into one file per day. This may be trivially -accomplished with the ``obs_sequence_tool`` program in any ``model/xxxx/work`` directory. Be sure to include the -``'../../../obs_def/obs_def_QuikSCAT_mod.f90'`` string in ``input.nml&preprocess_nml:input_files`` when you run -``preprocess``. - -Obs_to_table.f90, plot_wind_vectors.m -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +sequence file and then concatenate multiple observation sequence files into one file per day. This can be +accomplished with the :ref:`obs_sequence_tool` program.
To build the ``obs_sequence_tool``, +add ``obs_sequence_tool`` to the list of programs in ``quickbuild.sh``. -``DART/diagnostics/threed_sphere/obs_to_table.f90`` is a potentially useful tool. You can run the observation sequence -files through this filter to come up with a 'XYZ'-like file that can be readily plotted with -``DART/diagnostics/matlab/plot_wind_vectors.m``. Namelist --------
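Several hunks above move library locations out of per-program ``mkmf_*`` scripts and into ``mkmf.template`` (see the HDF4/netCDF snippet added to QuikSCAT.rst). As a quick sanity check of such a template, the small helper below expands the ``$(VAR)`` references on the ``LIBS`` line and lists the ``-L`` search paths, so you can verify each directory exists before running ``quickbuild.sh``. This is an illustrative sketch only; ``parse_mkmf_libs`` is a hypothetical helper, not part of DART.

```python
import re


def parse_mkmf_libs(template_text):
    """Return the -L library search paths from the LIBS line of an
    mkmf.template fragment, expanding simple $(VAR) references that are
    defined earlier in the same fragment (one-pass expansion only)."""
    # collect every "NAME = value" assignment in the fragment
    assignments = dict(re.findall(r'^\s*(\w+)\s*=\s*(.*?)\s*$', template_text, re.M))
    libs = assignments.get('LIBS', '')
    # expand $(NAME) using the assignments gathered above
    libs = re.sub(r'\$\((\w+)\)', lambda m: assignments.get(m.group(1), ''), libs)
    # -L<path> flags are the linker search directories worth checking
    return re.findall(r'-L(\S+)', libs)


if __name__ == '__main__':
    fragment = """
    NETCDF = /glade/u/apps/ch/opt/netcdf/4.8.1/intel/19.1.1
    HDF = /glade/p/cisl/dares/libraries/hdf
    LIBS = -L$(NETCDF)/lib -lnetcdff -lnetcdf -L$(HDF)/lib -lmfhdf -ljpeg
    """
    for path in parse_mkmf_libs(fragment):
        print(path)
```

Pairing this with ``os.path.isdir`` on each returned path catches the most common build failure (a stale library location) before the Fortran compile even starts.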