
[F32 Update] Fix for DelnFlux & DlenFluxNoSG #33

Open · wants to merge 2 commits into base: develop
Conversation

@FlorianDeconinck (Collaborator) commented Dec 2, 2024

The Fortran calculates the negative flux by inverting q and the offset q in the gradient calculation, while the original pyFV3 port used a negative u/v with the same gradient. This works at f64 but introduces errors at f32 for very small advected values.

This also cleans up a few differences with the Fortran that don't necessarily fail now but could with other data.

E.g.:

  • Replicate the Fortran subtraction order to reduce f32 errors
  • Use np.power to ensure proper type casting of the power op
  • Make the half-damp calculation explicit
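The np.power point can be made concrete with a toy sketch (not the pyFV3 code; `Float` stands in for the project's precision alias, assumed here to be np.float32). Under NumPy's promotion rules, a float32 base raised to an int64 exponent is promoted to float64, while np.power with an explicit dtype keeps the result at the working precision:

```python
import numpy as np

# Toy illustration, not the pyFV3 code. "Float" is assumed to be the
# project's 32-bit precision alias.
Float = np.float32

damp_c = np.array([0.2, 0.15], dtype=Float)
nord = np.array([2, 2], dtype=np.int64)
da_min = Float(1e-3)

# Plain ** promotes float32 ** int64 to float64 under NumPy's rules.
promoted = (damp_c * da_min) ** (nord + 1)

# np.power with an explicit dtype keeps the result at the working precision.
cast = np.power(damp_c * da_min, nord + 1, dtype=Float)
```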

Checklist:

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • Any dependent changes have been merged and published in downstream modules
  • New check tests, if applicable, are included
  • Targeted model, if this change was triggered by a model need/shortcoming

Use `np.power` to ensure proper type casting of power op
Make the half damp calculation verbose
@FlorianDeconinck (Collaborator, Author) commented:

Though this work comes from the push to match GEOS at f32, it should pass the f64 data set from FV3GFS and therefore be mergeable as-is.

@lharris4 commented Dec 3, 2024

Hi, @FlorianDeconinck Is this primarily to address round-off global mass errors?

@FlorianDeconinck (Collaborator, Author) commented:

This remaps the calculation to the Fortran so we narrow the difference at 32-bit.

Though the calculation is mathematically equivalent, at 32-bit the negative multiplication of del6_u costs us a bit of precision, and on small gradient multiplications like the ones here, those losses stack up to a bigger difference in D_SW.
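A toy illustration of the underlying effect (not the actual D_SW code): two algebraically equivalent float32 expressions can disagree in the last bits depending on operation order, which is why the port now mirrors the Fortran's exact form.

```python
import numpy as np

# Toy example, not the flux code: two algebraically equivalent ways
# to form u * (q1 - q0) at float32.
q0 = np.float32(1.0000001)
q1 = np.float32(1.0000002)
u = np.float32(3.0)

distributed = u * q1 - u * q0  # multiply first, then subtract
factored = u * (q1 - q0)       # subtract first (mirrors the Fortran form)

# The two results differ by a few ULPs at f32; per-point differences
# like this accumulate across a stencil sweep.
```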

@FlorianDeconinck (Collaborator, Author) commented:

As always, we have to follow the Fortran numerically until we are confident our dynamics have reached feature parity, so we can track any porting mistakes. With the introduction of 32-bit floats, we need to keep operations even closer than we did at 64-bit to be able to track numerical errors.

The larger plan calls for full scientific simulations, which should build confidence further than the narrow translate tests.

@oelbert (Collaborator) left a comment:
I love floating point arithmetic...

@@ -16,7 +17,7 @@ def calc_damp(damp_c: Quantity, da_min: Float, nord: Quantity) -> Quantity:
             "current implementation requires damp_c and nord to have "
             "identical data shape and dims"
         )
-    data = (damp_c.data * da_min) ** (nord.data + 1)
+    data = np.power((damp_c.data * da_min), (nord.data + 1), dtype=Float)
Collaborator comment:

Should this get stencilized at some point?

@FlorianDeconinck (Collaborator, Author) replied:

It could. It lives in the init, which is probably why it isn't. But it's flimsy because we rely on the API transparency between numpy and cupy to switch between CPU and GPU, something that will bite us sooner or later.
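The pattern being leaned on can be sketched as follows (a minimal illustration, not the pyFV3 code; the function name and signature are assumptions): numpy and cupy expose near-identical APIs, so the same array code runs on CPU or GPU depending on which module is bound.

```python
import numpy as np

try:
    import cupy as xp  # GPU arrays, if CUDA and cupy are available
except ImportError:
    xp = np  # CPU fallback: relies on the numpy/cupy API overlap

def calc_damp(damp_c, da_min, nord):
    # Works for numpy or cupy arrays because both expose power() with
    # the same signature; this transparency is exactly what the comment
    # above warns may eventually break down.
    return xp.power(damp_c * da_min, nord + 1, dtype=damp_c.dtype)
```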
