Costing Model Test Harness #1324
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files

@@ Coverage Diff @@
##             main    #1324   +/- ##
=======================================
  Coverage   94.38%   94.38%
=======================================
  Files         371      371
  Lines       37941    37941
=======================================
  Hits        35812    35812
  Misses       2129     2129

☔ View full report in Codecov by Sentry.
I think this should have some discussion through reviews; a couple of comments:
""" | ||
|
||
|
||
class UnitCostingTestHarness(abc.ABC): |
This shares a lot of code with the UnitTestHarness. Can it inherit from it instead?
@lbianchi-lbl do you foresee any issues with using inheritance here given how pytest classes work?
My comment above about sharing huge amounts of code with UnitTestHarness still stands. Every method is largely similar -- I would suggest factoring out the commonalities into a base class.
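Something along these lines, roughly (a quick sketch only; the class and method names below are illustrative, not the existing harness API):

```python
import abc

import pytest


class _ValueCheckMixin(abc.ABC):
    """Shared machinery proposed for both harnesses."""

    @abc.abstractmethod
    def configure(self):
        """Build and solve the model; return a {variable: expected value} dict."""

    def test_solution_values(self):
        # Common check reused by both harnesses: every registered variable
        # (assumed Pyomo-style, with a .value attribute) matches its expected
        # value within a relative tolerance.
        solutions = self.configure()
        for var, expected in solutions.items():
            assert var.value == pytest.approx(expected, rel=1e-3)


class UnitTestHarness(_ValueCheckMixin):
    """Unit-model-specific checks (DOF, unit consistency, ...) would go here."""


class UnitCostingTestHarness(_ValueCheckMixin):
    """Costing-specific setup would go here; the value checks are inherited."""
```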
Yea I agree, just waiting to get back around to this in order to implement it.
I think not testing the scaling factors is okay. The documentation should still be shared. I am generally not sure about separating the costing and unit model tests (or functions themselves). The current location of the unit model costing methods is mostly historical -- they were inside the costing package originally and got moved into separate modules (#835). I think we should consider whether the costing modules for the unit models in WaterTAP should be closer to the unit models themselves.
There's nothing stopping calling the separate
Note: this should be behind #1302.
Accidentally deleted the branch when cleaning up, so just restoring it.
@hunterbarber is this still active? Currently it is not on any release board.
Yes, it was waiting for some 1.0 changes to avoid conflicts before becoming active again.
@hunterbarber @bknueven Is there really a need to create a CostingModelTestHarness in the first place? It seems like the current version of the UnitModelTestHarness is already capable of testing costing. See the following example:
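Roughly, the pattern looks like this (a minimal sketch only; the build helper, import path, and attribute names are my assumptions, not the exact code from the repo):

```python
from idaes.core import UnitModelCostingBlock
from watertap.costing import WaterTAPCosting

# Import path below is an assumption about where the harness lives.
from watertap.unit_models.tests.unit_test_harness import UnitTestHarness


def build():
    # build_unit_flowsheet() is a hypothetical helper standing in for the
    # module-level build function that constructs m.fs.unit for the test.
    m = build_unit_flowsheet()
    m.fs.costing = WaterTAPCosting()
    # Attach costing to the unit model so the harness solves it as well.
    m.fs.unit.costing = UnitModelCostingBlock(
        flowsheet_costing_block=m.fs.costing
    )
    m.fs.costing.cost_process()
    return m


class TestUnitWithCosting(UnitTestHarness):
    def configure(self):
        m = build()
        # Costing results are registered alongside the unit model results;
        # the expected value here is a placeholder.
        self.unit_solutions[m.fs.unit.costing.capital_cost] = 1.0e6
        return m
```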
In other words, what additional functionality would we want the costing test harness to have that the UnitModelTestHarness doesn't have already?
I actually hadn't seen this, but good to know that at least the values can be checked with UnitModelTestHarness. So let's assume the solved variable values and the supporting model consistency information (units, DOF, etc.) can be checked with the UnitTestHarness if m.fs.unit.costing is built on m.fs.unit. Is it worth separating the costing tests so it is clear whether a failing test is a consequence of the unit model or the costing model? In theory they exist as separate products, although they are frequently used together. It also becomes more relevant for costing models that can be used with multiple unit models, like RO. Is it better to reduce the test workflow by having a single isolated test attached to the costing code, or to implement it multiple times within the unit model tests of each applicable unit model? I guess in this case the unit model test harness could still be the only one used. Just my initial thoughts.
When you say "Also it becomes more relevant for costing models that can be used for multiple unit models like RO", are you referring to unit models that have multiple costing methods? If so, you can see how it's handled in the crystallizer file, which leads to your question: is it better to have separate test(s) for the costing methods that aren't lumped together with the unit model tests, or is it better to just lump everything together (what is currently done)? Personally, I don't mind having everything lumped together, and I think it makes writing the tests a bit simpler.
I was actually referring to the opposite: one costing model that is used across two or more unit models. If the costing models are tested only at the unit model level, then the costing model testing would be redundant to some extent (e.g. costing/reverse_osmosis being tested multiple times across units/reverse_osmosis_0D, units/reverse_osmosis_1D, units/oaro_0D, and units/oaro_1D). Disclaimer that all of these are just my thoughts and I could very well be wrong: I think having independent testing in units/tests and costing/tests can more quickly isolate problems (in the unit model or the costing model) when tests fail, and reduce the length of the testing workflow in general. With that being said, I think both could use this. Also, I was trying to unify the costing models overall just because I don't think there's any real consistency at the moment.
@bknueven Any thoughts on the conversation above? I agree with Hunter that we should have independent tests in unit_models/tests and costing/unit_models/tests, but is it fine to use the UnitTestHarness for both purposes? If so, should the UnitTestHarness just stay as it is (see the crystallization implementation above), or is there additional functionality we want to add for testing unit costing?
I think testing the costing with each unit model is okay, even if redundant. Testing the costing methods without the unit models certainly seems possible, but also a lot of effort. And in any case, you would want to test the costing methods with each unit model anyway to make sure there's compatibility in the different versions. I am also not sure if it needs a separate harness. Maybe it should just be optional functionality in the unit test harness, which already has all the information you need.
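For instance, it could be as small as one extra check that only runs when costing expectations are registered (illustrative only; the attribute and method names are assumptions, not the current harness API):

```python
import pytest
import pyomo.environ as pyo


class UnitTestHarness:  # stand-in for the existing harness class
    def test_costing_solutions(self):
        # Optional check: only runs when a test class has registered costing
        # expectations (the attribute name is hypothetical).
        costing_solutions = getattr(self, "costing_solutions", None)
        if not costing_solutions:
            pytest.skip("no costing expectations registered for this unit model")
        for var, expected in costing_solutions.items():
            assert pyo.value(var) == pytest.approx(expected, rel=1e-3)
```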
@bknueven @hunterbarber Alright, so it seems to me like costing/unit_model/tests would test costing methods w/o unit models and unit_models/tests would test costing methods with the corresponding unit model. Both cases could just use the current iteration of this UnitTestHarness, so I think we can close this PR. The only issue now is actually adding the costing/unit_model/tests testing and making sure the unit model tests actually test costing. As Ben mentioned, testing the costing methods w/o the unit models would require a decent bit of effort/time, and it doesn't strike me as a high-priority task, so I'm not sure when this would get done. Ensuring the unit_models/tests costing actually test the costing method(s) also doesn't seem to be high-priority, although it would take much less work than the former.
Closing this PR since the UnitModelTestHarness can already handle adding unit model costing tests. For an example, see the crystallizer implementation referenced above.
Fixes/Resolves:
Similarly implement a costing model test harness, adapted from the unit model test harness
Summary/Motivation:
Systematically test unit model costing
Changes proposed in this PR:
Legal Acknowledgement
By contributing to this software project, I agree to the following terms and conditions for my contribution: