# [LABS BUILD PERF] Blog post source serialization is executed at >= O(n^2) repetitions during build #822
Labels: `area: react`, `labs 🔭` (Items related to the Labs website), `priority: high 🌋`, `type: bug 🐛`
### What site is this for?

Quansight Consulting, Quansight Labs
### Expected behavior

Each blog post would ideally be serialized only once per build (total cost O(n)), and everything from the serialization that is needed elsewhere in the build (metadata, etc.) would be stored, cached, or otherwise retained so that we never have to redo the serialization.
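One way the expected O(n) behavior could be achieved is a build-wide memoization of the serialization step. The sketch below is illustrative only: `getSerializedPost`, the slug-keyed cache, and `fakeSerialize` are assumed names, not the site's actual code.

```javascript
// Hypothetical sketch: serialize each post at most once per build and let
// every caller (index page, post pages, related-posts lookups) reuse it.
const serializedCache = new Map();

// Stand-in for the real, expensive MDX serialization step.
async function fakeSerialize(slug) {
  return { slug, serialized: true };
}

async function getSerializedPost(slug, serialize = fakeSerialize) {
  if (!serializedCache.has(slug)) {
    // Cache the promise so concurrent callers during the same build also
    // share a single serialization rather than racing to start their own.
    serializedCache.set(slug, serialize(slug));
  }
  return serializedCache.get(slug);
}
```

With something like this in place, only the first lookup per post pays the serialization cost, keeping the build's total serialization work at O(n).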
### Actual behavior

Each post's source is serialized many, many times during a site build. I discovered this while trying to chase down some mysterious `[Code Hike warning] C is not a valid language` warnings as part of #815.

Nx is frustratingly opaque about which file it is working on at a given time; I couldn't find any setting or CLI flag that reports the page currently being built. To get some insight, I added a `console.log()` line to the `serializePost()` function (see d6ed91a). This logging call gets executed hundreds of times.
I believe this is happening due to some combination of (i) the category filter on the main `/blog` page and, especially, (ii) the "see related posts" component at the bottom of the page for each blog post.

I believe (ii) is the more likely culprit because, in order to find related posts, we have to know the categories of the current post and of all other posts. Thus, I speculate that, as each blog page is rendered, it re-serializes every other post just to read its categories.
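If that speculation is right, the cost model looks roughly like the toy sketch below (all names here are illustrative, not the actual Labs code): each of the n post-page renders serializes all n posts again, so total serializations grow as n^2.

```javascript
// Toy model of the suspected behavior: every post page render calls a
// related-posts helper that serializes all posts just to read categories.
let serializations = 0;

function serializePost(post) {
  serializations += 1; // stand-in for the expensive MDX serialize step
  return { ...post };
}

function relatedPosts(current, allPosts) {
  return allPosts
    .map(serializePost) // re-serializes every post on every page render
    .filter((p) => p.slug !== current.slug && p.category === current.category);
}

const posts = Array.from({ length: 20 }, (_, i) => ({
  slug: `post-${i}`,
  category: i % 3,
}));

// Simulate building all 20 pages: 20 renders x 20 serializations each.
for (const post of posts) relatedPosts(post, posts);
console.log(serializations); // 400, i.e. n^2 rather than n
```

Under this model, doubling the post count quadruples the serialization work, which would be consistent with a single `console.log()` in `serializePost()` firing hundreds of times.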
We are already seeing 4min+ Labs site builds. Given the big-O complexity of what appears to be going on (at least O(n^2), if I have it right), this is going to become a logistical problem relatively soon.

### How to Reproduce the problem?
1. Check out the `labs-post-overrendering` branch.
2. Run `npx nx run labs:build:production`.
### Anything else?

This might also be relevant to the Consulting site; I'm not sure. But given that the GitHub-based blog machinery there was patterned after the Labs machinery, it seems likely to be a problem there too.
Attn @gabalafou