
Update pynucastro networks #1287

Merged: 13 commits into AMReX-Astro:development from update_pynucastro_nets on Sep 27, 2023

Conversation

@yut23 (Collaborator) commented Jul 21, 2023

Regenerated from the current pynucastro main branch: pynucastro/pynucastro@556584d.

Also add a Python script to do this with a single command.

@yut23 marked this pull request as draft July 21, 2023 22:36
@yut23 (Collaborator, Author) commented Jul 21, 2023

Going to wait for pynucastro/pynucastro#580 to be merged.

zingale pushed a commit that referenced this pull request Jul 22, 2023
Along with #1287, this resolves all the remaining warnings I could find with burn_cell and test_rhs.
@yut23 marked this pull request as ready for review July 22, 2023 17:16
@zingale (Member) commented Jul 22, 2023

do we want to wait until we do the PR that changes the interpolation for tables to work on logs?
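
For context, "work on logs" means interpolating the tabulated quantity in log space rather than linearly. A hypothetical one-dimensional sketch (not the actual Microphysics implementation):

```cpp
#include <cmath>

// Hypothetical sketch: interpolate log10(f) linearly in log10(T), then
// exponentiate back. Assumes positive table values and T_lo <= T <= T_hi.
double interp_log (double T, double T_lo, double T_hi,
                   double f_lo, double f_hi) {
    double x  = std::log10(T);
    double x0 = std::log10(T_lo);
    double x1 = std::log10(T_hi);
    double y0 = std::log10(f_lo);
    double y1 = std::log10(f_hi);
    return std::pow(10.0, y0 + (y1 - y0) * (x - x0) / (x1 - x0));
}
```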

@yut23 (Collaborator, Author) commented Jul 22, 2023

Not sure. What do the tests look like now?

@zingale (Member) commented Jul 22, 2023

I'm not sure I understand.

We haven't implemented that yet. So once we do implement the proper interpolation, we'd want to regenerate these again.

@yut23 (Collaborator, Author) commented Jul 22, 2023

Oh, I see. I was wondering if that would give bigger diffs on the regression tests. I think the Suzuki table updates are the only thing that would change the results in this PR right now.

@zhichen3 (Collaborator) commented

I was trying to build subchandra with CUDA on Perlmutter with the updated network, but the compiler complains that std::upper_bound doesn't work with CUDA:

```
error: calling a __host__ function("T1  ::std::upper_bound<const double *, double> (T1, T1, const T2 &)") from a __host__ __device__ function("part_fun::interpolate_pf<(int)72> ") is not allowed
```

I also tried on groot and got the same error, but I do see the CUDA test passed here...

@yut23 (Collaborator, Author) commented Jul 24, 2023

That's unfortunate; we can probably just write a manual binary search instead (or maybe reuse vector_index_lu() from the tabular rates). My guess for why the test passes is that interpolate_pf() is a templated function, so it doesn't get compiled unless it's used somewhere. The CUDA test doesn't build any networks that use partition functions; maybe we should add that?
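
A minimal sketch of what such a device-safe binary search could look like (the name upper_bound_index and this signature are illustrative, not the actual fix; AMREX_GPU_HOST_DEVICE is AMReX's host/device annotation):

```cpp
#include <AMReX_GpuQualifiers.H>

// Illustrative replacement for std::upper_bound that is safe to call from
// device code: returns the index of the first element of data[0..n) that is
// greater than x, or n if no such element exists.
AMREX_GPU_HOST_DEVICE
inline int upper_bound_index (const double* data, int n, double x) {
    int lo = 0;
    int hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (data[mid] <= x) {
            lo = mid + 1;   // the answer lies strictly to the right of mid
        } else {
            hi = mid;       // data[mid] > x, so mid is still a candidate
        }
    }
    return lo;
}
```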

@zhichen3 (Collaborator) commented

We do compile test_react with subch_simple in the CUDA test, though?

@yut23 (Collaborator, Author) commented Jul 24, 2023

Yes, but DO_DERIVED_RATES is false, so the generated get_partition_function() is trivial:

```cpp
void get_partition_function(const int inuc, [[maybe_unused]] const tf_t& tfactors,
                            Real& pf, Real& dpf_dT) {
    // inuc is the 1-based index for the species
    switch (inuc) {
    default:
        pf = 1.0_rt;
        dpf_dT = 0.0_rt;
    }
}
```
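
This also bears on why the CUDA test passed: nvcc only checks host/device call restrictions in a template body once the template is instantiated. A hypothetical, stripped-down illustration (not Microphysics code):

```cpp
#include <algorithm>
#include <AMReX_GpuQualifiers.H>

// Hypothetical demo: this compiles cleanly under nvcc as long as no device
// code instantiates it; the __host__-only std::upper_bound call is only
// diagnosed once lookup_demo<N> is actually used from device code.
template <int npts>
AMREX_GPU_HOST_DEVICE
double lookup_demo (const double* temps, double T) {
    const double* it = std::upper_bound(temps, temps + npts, T);
    return (it == temps) ? temps[0] : *(it - 1);
}
```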

@zhichen3 (Collaborator) commented

Ah okay. We could build with the ase network; it has DO_DERIVED_RATES=True.

@yut23 (Collaborator, Author) commented Jul 24, 2023

Yeah, that would be good since it exercises approximate rates too.

@zingale (Member) commented Sep 27, 2023

With the release of pynucastro 2.1.0, I think we are ready for this now. We should update to get the latest table changes.

@zingale (Member) commented Sep 27, 2023

I think we need to update some benchmarks now.

@zingale merged commit 2261118 into AMReX-Astro:development on Sep 27, 2023
22 checks passed
@yut23 mentioned this pull request Oct 2, 2023
@yut23 deleted the update_pynucastro_nets branch February 8, 2024 18:05