[F32 Update] Fix for DelnFlux & DelnFluxNoSG #33
base: develop
Conversation
Use `np.power` to ensure proper type casting of the power op. Make the half-damp calculation more verbose.
Though this work comes from the push to match GEOS at f32 capacities, it should still pass the f64 data set from FV3GFS and therefore be mergeable as-is.
Hi @FlorianDeconinck, is this primarily to address round-off global mass errors?
This is to remap the calculation to the Fortran so we narrow the difference at 32-bit. Though the calculation is mathematically similar, at 32-bit the negative multiplication of `u`/`v` introduces errors for very small advected values, which inverting `q` and the offset `q` in the gradient (as the Fortran does) avoids.
As always, we have to follow the Fortran numerically until we are confident our dynamics have reached feature parity, so we can track any porting mistakes. With the introduction of the 32-bit float, we need to keep operations even closer than we did at 64-bit to be able to track numerical errors. The larger plan calls for full scientific simulation, which should build confidence further than the narrow translate tests.
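A toy illustration (standalone numpy, not the pyFV3 stencils) of why algebraically equivalent orderings must be kept identical at f32: the order in which a tiny term meets a large one decides whether it survives rounding.

```python
import numpy as np

a = np.float32(1.0)
b = np.float32(1.0e-7)

# Same algebra, different rounding order at f32:
print((a + b) - a)  # 1.1920929e-07 -- b is snapped to a's ulp grid first
print((a - a) + b)  # 1e-07 -- the cancellation happens first, b survives exactly
```

This is why the port mirrors the Fortran operation order rather than a mathematically equivalent rewrite.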
I love floating point arithmetic...
```diff
@@ -16,7 +17,7 @@ def calc_damp(damp_c: Quantity, da_min: Float, nord: Quantity) -> Quantity:
             "current implementation requires damp_c and nord to have "
             "identical data shape and dims"
         )
-    data = (damp_c.data * da_min) ** (nord.data + 1)
+    data = np.power((damp_c.data * da_min), (nord.data + 1), dtype=Float)
```
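A minimal sketch of the dtype promotion that motivates this change (plain numpy stand-ins, not the pyFV3 `Quantity` types):

```python
import numpy as np

damp = np.full(3, 0.2, dtype=np.float32)  # stands in for damp_c.data * da_min
nord = np.zeros(3, dtype=np.int64)        # stands in for nord.data

# The ** operator promotes float32 ** int64 to float64...
print((damp ** (nord + 1)).dtype)  # float64

# ...while np.power with an explicit dtype keeps the computation at f32.
print(np.power(damp, nord + 1, dtype=np.float32).dtype)  # float32
```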
Should this get stencilized at some point?
It could. It lives in the init, which is probably why it isn't. But it's flimsy, because we rely on the transparency of numpy & cupy to do CPU/GPU, something that will bite us sooner or later.
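A sketch of the numpy/cupy transparency being relied on here (hypothetical module selection, not pyFV3's actual backend plumbing):

```python
import numpy as np

try:
    import cupy as cp  # GPU path, if available
    xp = cp
except ImportError:
    xp = np

def calc_damp_data(damp_c_data, da_min, nord_data, dtype=np.float32):
    # The same call works on numpy and cupy arrays only because cupy
    # mirrors numpy's ufunc API, including np.power's dtype argument.
    return xp.power(damp_c_data * da_min, nord_data + 1, dtype=dtype)
```

A stencilized version would make the CPU/GPU dispatch explicit instead of leaning on this API mirroring.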
The Fortran calculates the negative flux by inverting the `q` and the offset `q` in the gradient calculation, while the original port of `pyFV3` was using a negative `u`/`v` and the same gradient. This works at f64 but introduces errors at f32 for very small advected values.

Clean up of a few differences with Fortran that don't necessarily fail now but could with other data, e.g. using `np.power` to ensure proper type casting of the power op.

Checklist:
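A simplified 1-D sketch (hypothetical helper names, a structural illustration only, not the actual pyFV3 stencils) of the two formulations described above:

```python
import numpy as np

def neg_flux_fortran_order(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    # Fortran form: invert q and the offset q inside the gradient.
    return v[:-1] * (q[:-1] - q[1:])

def neg_flux_original_port(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    # Original pyFV3 port: negate v and keep the original gradient order.
    return -v[:-1] * (q[1:] - q[:-1])
```

Keeping the Fortran form keeps the sequence of rounded operations aligned with the reference, which is what the translate tests compare against at f32.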