azs - update README with consistent bullets
zsarnoczay committed Apr 4, 2024
1 parent: a446d49 · commit: b8a22f0
Showing 1 changed file with 61 additions and 61 deletions: README.md

### Changes in v3.2

- Changes that might affect backwards compatibility:

  - Unit information is included in every output file. If you parse Pelicun outputs and did not anticipate a Unit entry, your parser might need an update (see the sketch after this list).

  - Decision variable types in the repair consequence outputs are named using CamelCase rather than all capitals to be consistent with other parts of the codebase. For example, we use "Cost" instead of "COST". This might affect post-processing scripts.

  - For clarity, "ea" units were replaced with "unitless" where appropriate. There should be no practical difference in the calculations due to this change. Interstory drift ratio demand types are one example.

  - Weighted component block assignment is no longer supported. We recommend using the more versatile multiple component definitions (see the new feature below) to achieve the same effect.

  - Damage functions (i.e., assigning a quantity of damage as a function of demand) are no longer supported. We recommend using the new multilinear CDF feature to develop theoretically equivalent, but more efficient, models.
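
  A minimal sketch of the parser update mentioned in the unit-information item above. The layout shown (a row labeled "Units" directly under the header, with units such as USD_2011 and worker_day) is an assumption for illustration; actual pelicun output files may be organized differently.

  ```python
  import io
  import pandas as pd

  # Hypothetical output excerpt with a "Units" row under the header.
  csv_text = """\
  ID,Cost,Time
  Units,USD_2011,worker_day
  run-1,1000.0,2.5
  """

  raw = pd.read_csv(io.StringIO(csv_text), index_col=0)
  units = raw.loc["Units"]                # keep the unit labels
  data = raw.drop("Units").astype(float)  # numeric block for analysis
  ```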

- A new multilinear CDF random variable allows using the multilinear approximation of any CDF in the tool (see the sketch below).
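
  A minimal NumPy sketch of the underlying idea, inverse-transform sampling along a piecewise-linear CDF. This illustrates the concept with made-up anchor points; it is not pelicun's interface.

  ```python
  import numpy as np

  # Anchor points of a multilinear CDF: values (x) and non-exceedance
  # probabilities (y). The numbers are made up for illustration.
  x = np.array([0.00, 0.01, 0.02, 0.05])
  y = np.array([0.00, 0.30, 0.80, 1.00])

  # Inverse-transform sampling: map uniform draws through the inverse
  # of the piecewise-linear CDF.
  rng = np.random.default_rng(42)
  samples = np.interp(rng.uniform(size=10_000), y, x)
  ```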

- Capacity adjustment allows scaling or shifting default capacities (i.e., fragility curves) with factors specific to each Performance Group (see the sketch below).
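
  A sketch of the idea using scipy rather than pelicun's interface: scale the median of a lognormal fragility curve with a factor specific to one Performance Group. All numbers are hypothetical.

  ```python
  from scipy.stats import lognorm

  median, beta = 0.02, 0.4  # hypothetical fragility parameters
  scale_factor = 1.25       # adjustment for one Performance Group

  adjusted = lognorm(s=beta, scale=median * scale_factor)
  p_exceed = 1.0 - adjusted.cdf(0.02)  # exceedance probability at a demand
  ```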

- Support for multiple definitions of the same component at the same location-direction pair. This feature facilitates adding components with different block sizes to the same floor or defining multiple tenants on the same floor, each with their own set of components (see the sketch below).
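
  A hypothetical component-quantity table illustrating the idea; the column names are assumptions, not pelicun's exact schema.

  ```python
  import pandas as pd

  # The same component (CMP.A) defined twice at location 3, direction 1,
  # e.g., one entry per tenant, with different quantities and block sizes.
  cmp_marginals = pd.DataFrame(
      {
          "Location": ["3", "3"],
          "Direction": ["1", "1"],
          "Theta_0": [4.0, 2.0],  # quantity of each definition
          "Blocks": [4, 1],       # different block counts
      },
      index=pd.Index(["CMP.A", "CMP.A"], name="cmp"),
  )
  ```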

- Support for cloning demands, that is, taking a provided demand dataset, creating a copy, and considering it as another demand. For example, you can provide results of seismic response in the X direction and automatically prepare a copy of them to represent results in the Y direction (see the sketch below).
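
  The effect of cloning, shown with plain pandas rather than pelicun's configuration; the column label convention (type-location-direction) is an assumption for illustration.

  ```python
  import pandas as pd

  # X-direction floor accelerations only (hypothetical values).
  demands = pd.DataFrame({"PFA-1-1": [0.25, 0.31, 0.28]})

  # Clone the X-direction results (...-1) to represent the Y direction (...-2).
  demands["PFA-1-2"] = demands["PFA-1-1"]
  ```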

- Added a comprehensive suite of more than 140 unit tests that cover more than 93% of the codebase. Tests are automatically executed after every commit using GitHub Actions, and coverage is monitored through `Codecov.io`. Badges at the top of the README show the status of tests and coverage. We hope this continuous integration facilitates editing and extending the existing codebase for interested members of the community.

- Completed a review of the entire codebase using `flake8` and `pylint` to ensure PEP8 compliance. The corresponding changes yielded code that is easier to read and use. See the guidance in the README on linting and how to ensure newly added code is compliant.

- Models for estimating Environmental Impact (i.e., embodied carbon and energy) of earthquake damage as per FEMA P-58 are included in the DL Model Library and available in this release.

* "ListAllDamageStates" option allows you to print a comprehensive list of all possible damage states for all components in the columns of the DMG output file. This can make parsing the output easier but increases file size. By default, this option is turned off and only damage states that affect at least one block are printed.
- "ListAllDamageStates" option allows you to print a comprehensive list of all possible damage states for all components in the columns of the DMG output file. This can make parsing the output easier but increases file size. By default, this option is turned off and only damage states that affect at least one block are printed.

- Damage and Loss Model Library

  - A collection of parameters and metadata for damage and loss models for performance-based engineering. The library is available and updated regularly in the DB_DamageAndLoss GitHub repository.

  - This and future releases of Pelicun bundle the latest version of the library available at the time of release.

- DL_calculation tool

  - Support for combining built-in and user-defined databases for damage and loss models.

  - Results are now also provided in the standard SimCenter `JSON` format besides the existing `CSV` tables. You can specify the preferred format in the configuration file under Output/Format (see the sketch below). The default file format is still CSV.
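
    A minimal sketch of the setting described above as it might appear in a configuration file; the surrounding structure of a full config is omitted and the exact nesting is an assumption.

    ```python
    import json

    # Request both output formats; "Output/Format" is the path named above.
    config_fragment = {"Output": {"Format": {"CSV": True, "JSON": True}}}
    print(json.dumps(config_fragment, indent=2))
    ```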

  - Support for running calculations for only a subset of the available consequence types.

  - Several error and warning messages were added to provide more meaningful information in the log file when something goes wrong in a simulation.

- Update dependencies to more recent versions.

- The online documentation is significantly out of date. While we are working on an update, we recommend using the documentation of the [DL panel in SimCenter's PBE Tool](https://nheri-simcenter.github.io/PBE-Documentation/common/user_manual/usage/desktop/PBE/Pelicun.html) as a resource.

### Changes in v3.1

- Calculation settings are now assessment-specific. This allows you to use more than one assessment in an interactive calculation, and each will have its own set of options, including log files (see the sketch below).
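
  A minimal sketch of two independent assessments in one session, assuming `Assessment` accepts an options dictionary; the option keys shown are assumptions for illustration.

  ```python
  from pelicun.assessment import Assessment

  # Each assessment carries its own options, including its own log file.
  asmnt_a = Assessment({"LogFile": "run_a.log"})
  asmnt_b = Assessment({"LogFile": "run_b.log", "Verbose": True})
  ```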

- The uq module was decoupled from the others to enable standalone uq calculations that work without an active assessment.

- A completely redesigned DL_calculation.py script provides decoupled demand, damage, and loss assessment and more flexibility when setting up each of those steps when pelicun is used with a configuration file in a larger workflow.

- Two new examples that use the DL_calculation.py script and a JSON configuration file were added to the example folder.

- A new example that demonstrates a detailed interactive calculation in a Jupyter notebook was added to the DesignSafe project at https://www.designsafe-ci.org/data/browser/public/designsafe.storage.published/PRJ-3411v5. This project will be extended with additional examples in the future.

- Unit conversion factors were moved to an external file (settings/default_units) to make it easier to add new units to the list. This also allows redefining the internal units through a complete replacement of the factors. The internal units continue to follow the SI system (see the sketch below).
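
  A sketch of what such a factor list expresses, shown as a Python dict; the actual format and entries of settings/default_units are not reproduced here. The conversion values below are standard.

  ```python
  # Factors that convert each unit to its SI base unit.
  unit_factors = {
      "m": 1.0,
      "in": 0.0254,
      "ft": 0.3048,
      "g": 9.80665,  # standard gravity, m/s^2
  }

  def to_si(value: float, unit: str) -> float:
      """Convert a value in the given unit to SI."""
      return value * unit_factors[unit]
  ```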

- Substantial improvements in coding style using flake8 and pylint to monitor and help enforce PEP8.

- Several performance improvements made calculations more efficient, especially for large problems, such as regional assessments or tall buildings investigated using the FEMA P-58 methodology.

- Several bugfixes and a large number of minor changes that make the engine more robust and easier to use.

- Update recommended Python version to 3.10 and other dependencies to more recent versions.

### Changes in v3.0

- The architecture was redesigned to better support interactive calculation and provide low-level integration across all supported methods. This is the first release with the new architecture. Frequent updates are planned to provide additional examples, tests, and bugfixes in the next few months.

- New `assessment` module introduced to replace `control` module:
  - Provides high-level access to models and their methods
  - Integrates all types of assessments into a uniform approach
  - Most of the methods from the earlier `control` module were moved to the `model` module

- Decoupled demand, damage, and loss calculations:
  - Fragility functions and consequence functions are stored in separate files. Added new methods to the `db` module to prepare the corresponding data files and re-generated such data for FEMA P-58 and Hazus earthquake assessments. Hazus hurricane data will be added in a future release.
  - Decoupling removed a large amount of redundant data from the supporting databases and made the use of HDF and JSON files for such data unnecessary. All data are stored in easy-to-read CSV files.
  - Assessment workflows can include all three steps (i.e., demand, damage, and loss) or only one or two of them. For example, damage estimates from one analysis can drive loss calculations in another.

- Integrated damage and loss calculation across all methods and components:
  - This includes phenomena such as collapse (including various collapse modes) and irreparable damage.
  - Cascading damages and other interdependencies between components can be introduced using a damage process file (see the sketch after this list).
  - Losses can be driven by damages or demands. The former supports the conventional damage->consequence function approach, while the latter supports the use of vulnerability functions. These can be combined within the same analysis, if needed.
  - The same loss component can be driven by multiple types of damage. For example, replacement can be triggered by either collapse or irreparable damage.
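
  A minimal sketch of a damage process, assuming a key pattern of the form `<order>_<component>` with damage-state targets; the component and damage state names are illustrative only.

  ```python
  # When the component in each numbered key reaches the given damage
  # state, the prescribed damage state is imposed on the target component:
  # here, CMP.A reaching DS1 forces CMP.B into DS1.
  dmg_process = {
      "1_CMP.A": {"DS1": "CMP.B_DS1"},
  }
  ```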

- Introduced *Options* in the configuration file and in the `base` module. These options handle settings that concern pelicun behavior, general preferences that might affect multiple assessment models, and settings that users would not want to change frequently.
  - Default settings are provided in a `default_config.json` file. These can be overridden by providing any of the prescribed keys with a user-defined value assigned to them in the configuration file for an analysis (see the sketch below).
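
    A minimal sketch of such an override, assuming "Verbose" is one of the keys prescribed in `default_config.json`; any key left out keeps its default value.

    ```python
    # The user-defined value overrides the default; everything else is inherited.
    user_config_options = {"Verbose": True}
    ```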

- Introduced consistent handling of units. Each CSV table has a standard column that describes the units of the data in it. If the standard column is missing, the table is assumed to use SI units (see the sketch below).
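
  A hypothetical data file following this convention; the column names other than Units are made up for illustration.

  ```python
  import io
  import pandas as pd

  # Each row carries its own unit in the standard Units column. If the
  # column were missing, the values would be interpreted as SI.
  csv_text = """\
  ID,Units,Theta_0
  CMP.A,in,2.0
  CMP.B,rad,0.01
  """
  models = pd.read_csv(io.StringIO(csv_text), index_col=0)
  ```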

- Introduced consistent handling of pandas MultiIndex objects in headers and indexes. When tabular data is stored in CSV files, MultiIndex objects are converted to simple indexes by concatenating the strings at each level and separating them with a `-`. This facilitates post-processing CSV files in pandas without impeding post-processing in non-Python environments (see the sketch below).
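
  An illustration of the convention with plain pandas, not pelicun's own implementation; the header labels are examples.

  ```python
  import pandas as pd

  # A two-level header, as a MultiIndex.
  df = pd.DataFrame(
      [[0.1, 0.2]],
      columns=pd.MultiIndex.from_tuples([("PFA", "1"), ("PFA", "2")]),
  )

  # Flatten for CSV output: join the levels with '-'.
  df.columns = ["-".join(levels) for levels in df.columns]
  # df.columns is now ['PFA-1', 'PFA-2']
  ```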

- Updated the DL_calculation script to support the new architecture. Currently, only the config file input is used. Other arguments were kept in the script for backwards compatibility; future updates will remove some of those arguments and introduce new ones.

- The log files were redesigned to provide more legible information about the assessment.

### Changes in v2.6

- Support EDPs with more than 3 characters and/or a variable in their name, for example, SA_1.0 or SA_T1.
- Support fitting a normal distribution to raw EDP data; a lognormal fit was already available (see the sketch after this list).
- Extract key settings to base.py to make them more accessible for users.
- Minor bugfixes, mostly related to hurricane storm surge assessment.
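
An illustration of the distribution fitting named above, using scipy directly rather than pelicun's internals; the EDP values are made up.

```python
import numpy as np
from scipy.stats import norm

raw_edp = np.array([0.21, 0.35, 0.28, 0.40, 0.31])  # hypothetical demands
mu, sigma = norm.fit(raw_edp)  # maximum likelihood estimates
```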

## License

