Reduce allocations in list scroll updates #4398
Conversation
…ing of updateList
Thanks. Nice work. I just have a comment (not necessarily requesting changes) inline :)
widget/list.go
Outdated
// invariant: visible is in ascending order of IDs
func (l *listLayout) searchVisible(visible []itemAndID, id ListItemID) (*listItem, bool) {
    // binary search
    low := 0
    high := len(visible) - 1
    for low <= high {
        mid := (low + high) / 2
        if visible[mid].id == id {
            return visible[mid].item, true
        }
        if visible[mid].id > id {
            high = mid - 1
        } else {
            low = mid + 1
        }
    }
    return nil, false
}
I wonder if it would be cleaner to just use sort.Search() here? It might be slightly slower, but does that matter?
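For reference, here is a minimal standalone sketch of what a sort.Search()-based lookup could look like. The ListItemID, listItem and itemAndID types below are simplified stand-ins for the ones in widget/list.go, not the real definitions.

package main

import (
    "fmt"
    "sort"
)

// Simplified stand-ins for the widget's types, for illustration only.
type ListItemID = int

type listItem struct {
    id ListItemID
}

type itemAndID struct {
    id   ListItemID
    item *listItem
}

// searchVisible uses the standard library's sort.Search instead of a
// hand-rolled binary search. It still relies on visible being sorted in
// ascending order of IDs.
func searchVisible(visible []itemAndID, id ListItemID) (*listItem, bool) {
    i := sort.Search(len(visible), func(n int) bool { return visible[n].id >= id })
    if i < len(visible) && visible[i].id == id {
        return visible[i].item, true
    }
    return nil, false
}

func main() {
    visible := []itemAndID{{1, &listItem{1}}, {4, &listItem{4}}, {7, &listItem{7}}}
    item, ok := searchVisible(visible, 4)
    fmt.Println(ok, item.id) // prints: true 4
}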
Oh yeah, good call.
Done
Oh, I think we have a race condition, and it existed before, too. It seems like nothing prevents a user-code call to list.RefreshItem from running concurrently with the layout logic that is computing which items are visible. I think grabbing the render lock for the critical section in RefreshItem where we search the visible slice should fix it, since that slice is only written to while the render lock is held. We could even make it an RWMutex, or introduce a new RWMutex to protect specifically the visible slice.
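To make the locking idea concrete, here is a rough, self-contained sketch. The field and method names (renderLock, refreshItem, updateVisible) are invented for illustration and are not the actual Fyne internals; the point is just that the search of the visible slice runs under a read lock while any rewrite of the slice holds the write lock.

package main

import (
    "fmt"
    "sync"
)

// Hypothetical, trimmed-down shape of the widget state, for illustration only.
type ListItemID = int

type listItem struct {
    id ListItemID
}

type itemAndID struct {
    id   ListItemID
    item *listItem
}

type listLayout struct {
    renderLock sync.RWMutex // guards visible
    visible    []itemAndID
}

// refreshItem searches the visible slice under the read lock, so it cannot
// observe a concurrent update rewriting the slice mid-iteration.
func (l *listLayout) refreshItem(id ListItemID) (*listItem, bool) {
    l.renderLock.RLock()
    defer l.renderLock.RUnlock()
    for _, v := range l.visible {
        if v.id == id {
            return v.item, true
        }
    }
    return nil, false
}

// updateVisible replaces the visible slice while holding the write lock.
func (l *listLayout) updateVisible(next []itemAndID) {
    l.renderLock.Lock()
    l.visible = next
    l.renderLock.Unlock()
}

func main() {
    l := &listLayout{}
    l.updateVisible([]itemAndID{{2, &listItem{2}}})
    _, ok := l.refreshItem(2)
    fmt.Println("found:", ok) // prints: found: true
}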
Looks good to me. Just wondering if we should perhaps add a benchmark for the list widget? It would make it easier to verify performance improvements and so on.
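Something along these lines could be a starting point. This is only a sketch: it assumes the fyne.io/fyne/v2 test helpers and the List.ScrollTo API, and the row count and window size are arbitrary.

package widget_test

import (
    "strconv"
    "testing"

    "fyne.io/fyne/v2"
    "fyne.io/fyne/v2/test"
    "fyne.io/fyne/v2/widget"
)

// BenchmarkListScroll scrolls to a different row on every iteration so the
// list has to recalculate its visible items, and reports allocations per op.
func BenchmarkListScroll(b *testing.B) {
    const rows = 1000 // arbitrary row count

    list := widget.NewList(
        func() int { return rows },
        func() fyne.CanvasObject { return widget.NewLabel("template") },
        func(id widget.ListItemID, o fyne.CanvasObject) {
            o.(*widget.Label).SetText(strconv.Itoa(int(id)))
        },
    )

    w := test.NewWindow(list)
    defer w.Close()
    w.Resize(fyne.NewSize(300, 300))

    b.ReportAllocs()
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        list.ScrollTo(widget.ListItemID(i % rows))
    }
}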
I actually think there might be another race condition here, unless updateList always runs in the same goroutine. The second half of the function doesn't lock (because it calls user code per list item) while it iterates over the visible slices, which may be updated concurrently if another updateList has been invoked. I guess we might have to go back to using private slices for visible/wasVisible and introduce a new sync.Pool to reuse allocated slice memory, unless we can think of a way to enforce that only one updateList will ever run at a time.
OK, looking again, it's definitely not true that updateList always runs in the same goroutine, since it's invoked (through offsetUpdated) via public APIs like ScrollToItem that can be called from any goroutine. So we either need a way to post a re-render message to the rendering goroutine, come up with new locking, or create ephemeral slices.
Any ideas @andydotxyz on what is the right way forward? I think there are at least three options:
I don't have any specific thoughts, sorry. Is that the way forward or should we think more?
I implemented bullet point 2 from my last comment: creating local slices of visible and wasVisible that can safely be read in the non-locked portion of updateList, and reusing allocated slice memory with a new sync.Pool added to the listLayout.
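Roughly the shape of that approach, as a self-contained sketch. The helper names (visibleSnapshot, releaseSnapshot) and the trimmed-down itemAndID type are invented for illustration; they are not the actual code in this PR.

package main

import (
    "fmt"
    "sync"
)

// Trimmed-down stand-in for the widget's itemAndID, for illustration only.
type ListItemID = int

type itemAndID struct {
    id ListItemID
}

// slicePool hands back previously allocated []itemAndID backing arrays so
// that building a fresh snapshot does not allocate on every scroll update.
var slicePool = sync.Pool{
    New: func() any { return new([]itemAndID) },
}

// visibleSnapshot builds a local slice of the currently visible rows using
// pooled capacity. The caller can iterate it outside any lock and must hand
// it back with releaseSnapshot when done.
func visibleSnapshot(ids []ListItemID) *[]itemAndID {
    vis := slicePool.Get().(*[]itemAndID)
    *vis = (*vis)[:0] // keep the capacity, drop the old contents
    for _, id := range ids {
        *vis = append(*vis, itemAndID{id: id})
    }
    return vis
}

func releaseSnapshot(vis *[]itemAndID) {
    slicePool.Put(vis)
}

func main() {
    vis := visibleSnapshot([]ListItemID{3, 4, 5})
    for _, v := range *vis {
        fmt.Println("visible row", v.id)
    }
    releaseSnapshot(vis)
}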
Cool. Well done. Will have a look in a day or two 👍
Thanks for working on this performance improvement.
Description:
Previously, we were allocating at least three new slices and one new map on every call to updateList.
This PR reuses the same allocated slice capacity to hold the objects that will be drawn on each update, and switches from maps to slices with reusable capacity to keep track of what is visible. Since we only expect around 50 list rows to ever be visible at once, and we can use binary search because items are always inserted in ascending row order, this should be even faster than using maps.
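To sanity-check the "faster than maps" claim at the roughly 50-row scale, a small standalone benchmark like the following could compare the two lookup strategies. It uses only the standard library; the row count and ID spacing are arbitrary.

package visible_test

import (
    "sort"
    "testing"
)

const visibleRows = 50 // roughly the most rows a list ever shows at once

// ids is kept in ascending order, mirroring how rows are inserted.
var ids = func() []int {
    s := make([]int, visibleRows)
    for i := range s {
        s[i] = i * 3
    }
    return s
}()

// idSet is the map-based equivalent of the same membership information.
var idSet = func() map[int]struct{} {
    m := make(map[int]struct{}, visibleRows)
    for _, id := range ids {
        m[id] = struct{}{}
    }
    return m
}()

var sink bool // prevents the compiler from optimising the lookups away

func BenchmarkMapLookup(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _, sink = idSet[(i%visibleRows)*3]
    }
}

func BenchmarkSliceBinarySearch(b *testing.B) {
    for i := 0; i < b.N; i++ {
        want := (i % visibleRows) * 3
        j := sort.SearchInts(ids, want)
        sink = j < len(ids) && ids[j] == want
    }
}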
Checklist: