Clarifier, Crystallizer & RO Unit Test Harness #1301
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@           Coverage Diff           @@
##             main    #1301   +/-   ##
=======================================
  Coverage   94.39%   94.39%
=======================================
  Files         371      371
  Lines       37922    37922
=======================================
  Hits        35796    35796
  Misses       2126     2126
☔ View full report in Codecov by Sentry.
@bknueven I've been seeing this error locally as well, but it's not consistent and I'm not exactly sure what it means: AttributeError: 'NoneType' object has no attribute 'config' (see Checks / pytest (user mode) (py3.11/linux)). For instance, if I run the test 3 times, it may work fine the first 2 but then fail on the third. I can resolve the failure locally by cutting the code and pasting it back in. I'm not sure what that means, but I imagine it's a bug we want to resolve before merging this PR. Occasionally this error also causes some of the following tests to fail.
I will look into this further. That said, I do have a question: why is the unit test harness not utilizing the built-in IDAES …
If we were to use the …
@bknueven I'm not able to replicate these initialization failures on my Windows computer. Thoughts?
I would suggest we don't change the scaling from the test in …
Everything LGTM.
Looking through this implementation made me rethink the decision about the tolerances applied at the test harness level and how many significant figures should be written when establishing the unit model solutions. Here, some expected values carry a single significant figure (e.g., 1e6) while others carry as many figures as a copied-and-pasted value, and that seems to work for all of these cases. That discussion shouldn't hold this PR up, though.
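To make that concrete, here is a purely illustrative example; pytest.approx and the rel=1e-3 tolerance are assumed for the sake of the example, not the harness's actual comparison mechanism:

# Illustration only: the rel=1e-3 tolerance is an assumed value, not the
# harness's actual setting.
from pytest import approx

solution = 1.0003e6  # hypothetical solved value from a unit model

assert solution == approx(1e6, rel=1e-3)       # one-sig-fig expected value passes
assert solution == approx(1.0003e6, rel=1e-3)  # fully copied expected value passes
assert solution != approx(1.002e6, rel=1e-3)   # ~0.2% disagreement fails

In other words, with a loose relative tolerance the number of significant figures written for the expected value barely matters; it only starts to matter if the tolerance is tightened.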
Please look at the indirect coverage changes; these are lines that were previously covered and no longer are: https://app.codecov.io/gh/watertap-org/watertap/pull/1301/indirect-changes. I think it's okay to have a few one-off tests in addition to those in the harness, but the decrease in coverage is not insignificant for the affected models. Alternatively, maybe this means there is additional functionality that should be added to the unit test harness.
def test_reporting(self):
    m = build()
    m.fs.unit.report()
Since this is the same in every example, I would suggest adding a test_reporting to the test harness itself.
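A minimal sketch of what that could look like; the class name, the fixture name, and the assumption that configure() returns the built flowsheet are illustrative guesses, not the harness's verified API:

# Sketch only -- UnitTestHarness, configure(), and the fixture name are
# assumptions, not the verified API of the harness introduced in #1277.
import pytest


class UnitTestHarness:
    """Stand-in for the harness base class; subclasses implement configure()."""

    @pytest.fixture(scope="class")
    def unit_frame(self):
        # configure() is expected to build, scale, and initialize the flowsheet
        return self.configure()

    def test_reporting(self, unit_frame):
        # Shared test: report() should run without raising for every unit model
        unit_frame.fs.unit.report()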
I'll probably just wait until this change is merged to update #1321; the change that is causing the coverage check to fail is only on the report function.
Not exactly sure what's causing the current failures, but they're also appearing in #1295 ...
transformation_scheme="BACKWARD",
transformation_method="dae.finite_difference",
If we remove these lines for one of the build functions, then this PR will have no change in coverage. It won't affect how the model is built, since these are the defaults.
https://app.codecov.io/gh/watertap-org/watertap/pull/1301/indirect-changes
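As a small standalone sketch of why dropping them is safe, Pyomo's config machinery shows the default-value behavior directly (this is illustrative, not the RO model's actual CONFIG block):

# Illustration: passing a config option equal to its declared default yields
# the same configuration as omitting it, so the built model is unchanged.
from pyomo.common.config import ConfigDict, ConfigValue

CONFIG = ConfigDict()
CONFIG.declare("transformation_method", ConfigValue(default="dae.finite_difference"))
CONFIG.declare("transformation_scheme", ConfigValue(default="BACKWARD"))

explicit = CONFIG({"transformation_method": "dae.finite_difference",
                   "transformation_scheme": "BACKWARD"})
implicit = CONFIG({})

assert explicit.transformation_method == implicit.transformation_method
assert explicit.transformation_scheme == implicit.transformation_scheme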
Alright, will do. Thanks!
Summary/Motivation:
Applies the unit test harness (introduced in #1277) to the clarifier, crystallizer, and RO models.
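For readers who haven't seen the harness, the test files in this PR roughly follow the pattern sketched below; the names UnitTestHarness, configure, and unit_solutions are assumptions for illustration and may not match the implementation from #1277 exactly.

# Rough sketch of a harness-based test module; names are assumptions, not the
# verified WaterTAP API.
from pyomo.environ import ConcreteModel


def build():
    # Build, scale, and initialize the unit model flowsheet here
    # (clarifier, crystallizer, or RO in this PR).
    m = ConcreteModel()
    ...
    return m


class TestMyUnitModel:  # would subclass the harness base class, e.g. UnitTestHarness
    def configure(self):
        m = build()
        # Register expected values for the harness to check after solving, e.g.
        # self.unit_solutions[m.fs.unit.area] = 50  # attribute name assumed
        return m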
Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: