Pyrolysis Simulation Time #13187
I am looking into this a bit more. There were some syntax changes to the solid phase pyrolysis routine between versions 6.7.7 and 6.7.8. I am wondering if the changes in syntax are resulting in the difference in runtime since what is being requested by the solid phase solver is not the same. I'll let you know what I find. |
We have also made some recent changes in how small the timestep can get as a layer THICKNESS burns away, to reduce numerical instabilities. |
How recent a change are you thinking of? Here are the results of a timing study I ran with the input file above: 6.7.4 - 3.5 s. The jump between 6.7.7 and 6.7.8 could be due to the syntax change, but that is only part of the difference. |
What I was thinking of may have been since 6.9.1. |
I recompiled 6.7.1 and compared it with the latest source on spark. 6.7.1 took 8.236 s and the latest took 10.734 s. Nothing obvious to me from the .out files as to why. |
The difference between the latest source and Jonathan's observation of the release version of 6.9.1 could be due to changes made since 6.9.1 to timestepping when THICKNESS changes. |
I added an output quantity of SUBSTEPS as both a DEVC and a BNDF and only see values of 1 for the latest. So doesn't that mean we are taking DT_BC=1 with 1 substep for the solid phase? |
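For reference, the outputs can be requested along these lines in standard FDS namelist syntax (the device ID and coordinates here are placeholders, not taken from the case file):

```
&DEVC ID='substeps', QUANTITY='SUBSTEPS', XYZ=0.05,0.05,0.0, IOR=3 /
&BNDF QUANTITY='SUBSTEPS' /
```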
When I compile the latest master on Windows, the timing is similar to the pre-compiled 6.9.1 timing, ~5.8 s for both. The 8.2 s on spark versus 5.8 s on my local machine is odd to me. Were you using a build with openmp activated? What's the clock speed per core on spark? |
Not using openmp. Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz |
Here is info on one of the cores on spark001; the clock speed should be the same for other nodes/cores:
processor : 63
vendor_id : GenuineIntel
cpu family : 6
model : 106
model name : Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
|
My laptop is using an Intel Core i7-13800H processor. The website says a max turbo frequency of 5.2 GHz, but Task Manager shows FDS capping out at ~4.0 GHz. |
Hi Jon, I have a laptop with performance and efficiency cores, also Intel. I noticed that with the openmp target the calculation was quite a bit slower. Somehow it was not maxing out the clock speed.
|
I'll take a look at this today. I want to look at the |
These are the results I get for the file above on spark:
It appears that there is a significant change between 6.7.5 and 6.7.6. |
It appears that the increased time in the pyrolysis routine dates back to this commit.
|
This was done to reduce instabilities when the wall is renoded. Because we store temperatures at the node centers, when we renode we have to recompute the surface temperature to preserve the correct assumed relationship at the boundary. We were getting large swings in the surface temperature, which impacted heat transfer, pool evaporation, etc. |
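For intuition, a generic node-centered reconstruction (an illustration only, not necessarily the formula FDS uses): with temperatures $T_1$ and $T_2$ stored at the first two node centers on a uniform spacing $\delta x$, linear extrapolation to the exposed face gives

$$T_s \approx T_1 + \tfrac{1}{2}(T_1 - T_2) = \tfrac{1}{2}(3T_1 - T_2),$$

so when renoding shifts the node centers and re-interpolates $T_1$ and $T_2$, the reconstructed $T_s$ can swing even though the interior solution barely changes.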
You can increase RENODE_DELTA_T on SURF to limit this effect. It defaults to 2 K. |
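For example (the ID, material, and thickness here are placeholders, not values from the case file):

```
&SURF ID='SAMPLE', MATL_ID='FUEL', THICKNESS=0.01, RENODE_DELTA_T=100. /
```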
Thanks for looking into it. I am not sure renoding is the issue in this case. The case I uploaded here does not heat very quickly, with the surface only heating ~2 °C over the full 1800 s simulation. I re-ran the case with RENODE_DELTA_T=100 and it did not change the computational time significantly. (I ran the comparison with the FDS 6.9.1 release.) |
Try adding a PROF for the temperature profile and look to see if there is anything odd with the noding. |
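Something like the following, assuming standard PROF syntax (the ID and coordinates are placeholders for a point on the heated surface):

```
&PROF ID='wall_prof', QUANTITY='TEMPERATURE', XYZ=0.05,0.05,0.0, IOR=3 /
```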
I performed a git bisect using the simple test case at the top. Double-check what I did: run this case with the version that was identified by the git bisect, and then run with the version just prior to that. If you see a big change in time, and it is not the renoding, then there might be something else in that commit causing the increased time. |
I ran the case with the two bounding commits and did see a difference in computational time: df3b91c - 4.1 seconds. The re-noding commit had the same time with RENODE_DELTA_T=100 as without setting it. I did not see anything odd in the gridding. The attached script will generate the attached video of the time-resolved PROF. https://github.com/user-attachments/assets/7791f6b9-7824-44b8-a1d5-691ee19a6e92 Edit: Sorry for the inconsistency in times between this and my earlier timing-benchmark post. I had to run this benchmark through my Linux boot since my Windows machine is busy with another model right now. |
Interestingly, most of the computational time is coming from the REMESH_CHECK_IF loop when REMESH_CHECK is True (i.e., the temperature does not exceed RENODE_DELTA_T). If I force the loop to always go through the temperature check (setting RENODE_DELTA_T=1e-10), the computational time in df3b91c drops to 2.2 seconds, compared to 4.1 seconds when RENODE_DELTA_T is not set. The part of the loop that RENODE_DELTA_T bypasses checks whether the number of cells in each layer should be decreased when the thickness of the SURF has decreased by more than TWO_EPSILON_EB. Edit: The original comment stated REMESH_CHECK is False, but it should have said True. |
I do not understand the |
Gave this a little thought. Right now we only store, from time step to time step, the current wall noding. So if we picked something larger than TWO_EPSILON_EB for the remesh check at line 2291, we would undoubtedly find ourselves in a situation where the nodes of a wall cell never meet the check for the larger number on any single time step, but do over many time steps. If we stored both the current wall noding and the noding as of the last remesh (we would only need to allocate this when a cell enters the GET_WALL_NODE_WEIGHTS block at line 1965), then we could base the remesh check on a wall node having a net decrease, over one or more timesteps, exceeding some fraction of the node size since the last renode. |
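A minimal Fortran sketch of that idea, assuming hypothetical names (CHECK_REMESH, DX_NOW, DX_LAST_REMESH, and FRACTION are illustrative, not actual FDS variables):

```fortran
! Hypothetical sketch: trigger a remesh only when some node has shrunk,
! accumulated over one or more time steps, by more than FRACTION of its
! size at the last remesh. Names are illustrative, not FDS's.
SUBROUTINE CHECK_REMESH(N, DX_NOW, DX_LAST_REMESH, REMESH_NEEDED)
   INTEGER, INTENT(IN) :: N
   REAL(8), INTENT(IN) :: DX_NOW(N)          ! current node sizes
   REAL(8), INTENT(IN) :: DX_LAST_REMESH(N)  ! node sizes stored at the last remesh
   LOGICAL, INTENT(OUT) :: REMESH_NEEDED
   REAL(8), PARAMETER :: FRACTION = 0.1D0    ! assumed threshold: 10% net shrinkage
   INTEGER :: I

   REMESH_NEEDED = .FALSE.
   DO I = 1, N
      IF (DX_LAST_REMESH(I) - DX_NOW(I) > FRACTION*DX_LAST_REMESH(I)) THEN
         REMESH_NEEDED = .TRUE.
         EXIT
      ENDIF
   ENDDO
   ! The caller would perform the actual remesh and then refresh
   ! DX_LAST_REMESH with the post-remesh node sizes.
END SUBROUTINE CHECK_REMESH
```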
If we limit the allocation to |
Update: still working on this. While cleaning up the remeshing logic, I found some other things that might cause future problems. I think I have a better and somewhat simpler set of logic that should reduce the number of times we do a full remesh. Doing some testing now. It remains to be seen whether, after doing this, we will still need the RENODE_DELTA_T parameter. |
Seems to me that there are always steep gradients in temperature near the surface that will trigger that criterion. But I don't know if these Delta T's are really a problem or not. |
I cleaned up the input file, fixing the units for SPECIFIC_HEAT and removing TMP_BACK, which overrides BACKING='INSULATED' (a sketch of this cleanup follows this comment). With this version much less time is spent in WALL; however, you still see the time increase from 6.7.5 to 6.7.6, but it comes down again in 6.9.1. The new renoding scheme (plus changes to PYROLYSIS) is about 10% cheaper than 6.7.5. Timing results below. It is interesting that in 6.9.1 we see time in PART (about 3%) even though this case has no PART inputs or DEVC that invoke particles. The plot shows wall thickness. The new code matches 6.9.1, but both differ from 6.7.5 and 6.7.6; however, we have spent a lot of time over the last couple of years focusing on energy and mass conservation for pyrolysis, so this isn't surprising. For renoding walls I did a few things:
It passed firebot on pele, but I will wait to push until after our dev meeting.
|
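A sketch of that input-file cleanup, with placeholder property values (FDS expects SPECIFIC_HEAT in kJ/(kg·K), and setting TMP_BACK overrides BACKING='INSULATED'):

```
&MATL ID='FUEL', DENSITY=500., CONDUCTIVITY=0.2, SPECIFIC_HEAT=1.5 / kJ/(kg·K), not J/(kg·K)
&SURF ID='SAMPLE', MATL_ID='FUEL', THICKNESS=0.01, BACKING='INSULATED' / no TMP_BACK, so the insulated backing applies
```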
Describe the bug
I was revisiting our internal kinetics-optimization routine to integrate optimization of effective battery properties using a similar approach. I noticed that the run times of the solid-phase-only simulations are higher in the latest versions of FDS for the cases used in the optimization. We had a similar discussion a few years ago about a difference between FDS 6.2 and 6.7 in #6848. The change made in 6.7.1 in the calculation of temperature-dependent material properties brought the runtime down to levels in line with version 6.2.
The attached case runs ~50% slower on the latest FDS than on 6.7.1. On my local machine, the timing for 6.7.1 is ~4 seconds and for the release version of 6.9.1 is ~6 seconds. I was expecting 6.9.1 to run a bit faster, since we removed the overhead associated with openmp in the version I was benchmarking. For solid-phase-only optimization it may not be too big a deal, but I am wondering if the difference also has an impact in HT3D calculations, where we use a more refined grid in the solid phase.
Additional context
coneTest2_s6.txt