Can I overallocate CPaths for reuse or for use in builders? #931
Unanswered
schellingerhout asked this question in Q&A
Replies: 1 comment 1 reply
-
It seems like you are optimizing a potential bottleneck without having a measurement. For complex algorithms with millions of offsets and boolean operations, I've never noticed memory allocation being the bottleneck. ClipperLib will do far more internal allocations than will ever happen here. Also keep in mind that resizing the result is incredibly efficient because the stored Path(s) will be moved, not copied.
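A minimal sketch of what "moved, not copied" means here. The Point64/Path64 stand-ins below only mimic the library's types; this is not Clipper2 code, just an illustration that when the outer vector of paths reallocates, each path's heap buffer is handed over rather than deep-copied:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Stand-in types that only mimic Clipper2's Path64/Paths64 for this demo.
struct Point64 { int64_t x, y; };
using Path64  = std::vector<Point64>;
using Paths64 = std::vector<Path64>;

int main() {
  Paths64 result;
  result.push_back(Path64(1000000));          // one large path
  const Point64* before = result[0].data();   // address of its vertex buffer

  result.reserve(result.capacity() + 1);      // force the outer vector to reallocate
  const Point64* after = result[0].data();

  // The vertex data was not copied: the inner vector was moved (pointer handoff).
  std::printf("same vertex buffer: %s\n", before == after ? "yes" : "no");
  return 0;
}
```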
-
I just need a second set of eyes on ConvertCPathsToPathsT to verify that I am not going down a path (pun intended) that will get me into trouble. Since the internal code uses Paths<T>, all that is needed in the conversion from CPaths is the number of paths and then the number of vertices of each path. This means that "A: Array size (as distinct from the size in memory)" is not used for input, which is fantastic for me. I can then overallocate and create a builder without separate memory tracking, and I can reuse a single block of memory for paths.

Why is this important? Memory allocation is one of the slowest operations you can do in code. Any time you can use a pool, pre-allocate, or overallocate and reuse, you gain time. Why not simply lie about the size in the "A" field? Because then I would have to write separate memory tracking, or box this into another structure that holds the "real" count. If the array holds its real (allocated) size in "A" but that is allowed to be independent of the used size, it's exactly what I need to keep things simple.
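To make that concrete, here is a rough sketch of the read side as I understand the layout: a (A, C) header, then per path a (N, 0) header followed by N x/y pairs. The types and the function name below are my own stand-ins, not the library's ConvertCPathsToPathsT. The point is that A is never read, so an overallocated buffer parses identically:

```cpp
#include <cstdint>
#include <vector>

struct Point64 { int64_t x, y; };
using Path64  = std::vector<Point64>;
using Paths64 = std::vector<Path64>;

// Sketch only: parse a flat CPaths64-style buffer using just C and each path's N.
Paths64 CPathsToPaths64(const int64_t* cpaths) {
  const int64_t path_count = cpaths[1];      // "C": number of paths
  const int64_t* p = cpaths + 2;             // skip the (A, C) header; A is ignored
  Paths64 result;
  result.reserve(static_cast<size_t>(path_count));
  for (int64_t i = 0; i < path_count; ++i) {
    const int64_t n = p[0];                  // "N": vertices in this path
    p += 2;                                  // skip the (N, 0) header
    Path64 path;
    path.reserve(static_cast<size_t>(n));
    for (int64_t j = 0; j < n; ++j, p += 2)
      path.push_back({p[0], p[1]});
    result.push_back(std::move(path));
  }
  return result;
}
```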
This invariant should be acceptable:
A >= CPaths_Header_Size + CPath_Header_Size * NumberOfPaths + NumberOfVertices * Dimensionality
where CPaths_Header_Size and CPath_Header_Size are both two, corresponding to the (A, C) and (N, 0) headers respectively.
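For illustration, a rough sketch of the kind of reusable builder I have in mind under that invariant (all names here are mine, not Clipper2's): "A" always records the allocated capacity, while the used size is implied by "C" and the per-path "N" values, so reset() can keep the allocation and the same block can be reused across many calls:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical builder for a flat CPaths64-style buffer; not part of Clipper2.
class CPathsBuilder {
 public:
  explicit CPathsBuilder(size_t initial_capacity = 1024)
      : buf_(initial_capacity > 2 ? initial_capacity : 2) { reset(); }

  // Clear the logical contents but keep the allocation for reuse.
  void reset() {
    used_ = 2;                                    // (A, C) header
    buf_[0] = static_cast<int64_t>(buf_.size());  // A = allocated size
    buf_[1] = 0;                                  // C = number of paths
  }

  void add_path(const std::vector<std::pair<int64_t, int64_t>>& pts) {
    ensure(2 + pts.size() * 2);
    buf_[used_++] = static_cast<int64_t>(pts.size());  // N
    buf_[used_++] = 0;                                 // second header slot
    for (const auto& pt : pts) {
      buf_[used_++] = pt.first;                        // x
      buf_[used_++] = pt.second;                       // y
    }
    ++buf_[1];                                         // bump path count C
  }

  const int64_t* data() const { return buf_.data(); }  // hand this to the C API

 private:
  void ensure(size_t extra) {
    if (used_ + extra > buf_.size()) {
      buf_.resize((used_ + extra) * 2);                // grow geometrically
      buf_[0] = static_cast<int64_t>(buf_.size());     // keep A = capacity
    }
  }
  std::vector<int64_t> buf_;
  size_t used_;
};
```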