Proposed alternative to WorkerThread #1014

Open · modersohn opened this issue Dec 30, 2020 · 3 comments
Labels: Open for Discussion (there are several possibilities to address the issue and anyone is invited to comment)

Comments

modersohn commented Dec 30, 2020

After we found out in #1001 that there is essentially no way to properly synchronize WorkerThread, here is an idea for how validation of the positional cache (i.e. what DoValidateCache() currently does) could be done in the main thread without blocking it too much, thereby getting rid of WorkerThread and its problems completely:

The general idea is to validate the cache in smaller chunks rather than in one go. Each chunk would be started by posting a window message; when that message is handled, a message for the next chunk is posted, but only if validation has not been cancelled.

That way validation would not block, so the tree stays responsive. It also avoids the fundamental problem of a thread walking the nodes even though the tree's node structure is not thread-safe. A possible disadvantage is that a window handle is required for posting the messages, but since the original WorkerThread design required one as well, this doesn't seem like a big problem.
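
To make this concrete, here is a minimal sketch of the message-driven loop, assuming a hypothetical message ID CM_VALIDATECACHECHUNK and hypothetical members StartCacheValidation, FValidationCancelled, FCacheComplete and ValidateNextChunk (none of these exist in the current code):

```pascal
// Minimal sketch only - all names below (CM_VALIDATECACHECHUNK,
// FValidationCancelled, FCacheComplete, ValidateNextChunk) are hypothetical.
// Requires Winapi.Windows and Winapi.Messages.
const
  CM_VALIDATECACHECHUNK = WM_APP + 100;

procedure TBaseVirtualTree.StartCacheValidation;
begin
  FValidationCancelled := False;
  // Kick off the first chunk; everything runs in the main thread.
  PostMessage(Handle, CM_VALIDATECACHECHUNK, 0, 0);
end;

// Declared in the class as:
//   procedure CMValidateCacheChunk(var Message: TMessage); message CM_VALIDATECACHECHUNK;
procedure TBaseVirtualTree.CMValidateCacheChunk(var Message: TMessage);
begin
  if FValidationCancelled then
    Exit;
  ValidateNextChunk; // validate e.g. the next 1000 nodes of the positional cache
  if not FCacheComplete then
    // Re-post instead of looping, so pending input messages are processed
    // in between and the tree stays responsive.
    PostMessage(Handle, CM_VALIDATECACHECHUNK, 0, 0);
end;
```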

This change also has the potential to improve the following things:

  1. Partial usage of the cache - until now, it is either complete or not used at all.
  2. Currently, if the cache is not ready, the code calculates the positions without the cache - but fails to update the cache with the information it has already calculated! With the changed design, it could directly and synchronously fill the cache up to the coordinate it needs right now. That would almost always be faster than starting from scratch and then throwing the results away (see the sketch after this list).
  3. If this works out nicely, the cache could ideally always be used, which would have the benefit that the tree doesn't start behaving differently once a certain threshold number of nodes is surpassed.
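
For point 2, here is a minimal sketch of such a synchronous fill-up, assuming hypothetical members FillCacheUpTo, FPositionCache, FLastCachedNode and FLastCachedTop; only NodeHeight[] and GetNextVisibleNoInit are existing API:

```pascal
// Hypothetical sketch - FillCacheUpTo, FPositionCache, FLastCachedNode and
// FLastCachedTop do not exist in the current code.
procedure TBaseVirtualTree.FillCacheUpTo(Y: Integer);
begin
  // Extend the cache only as far as the current query requires. Later
  // queries continue from FLastCachedNode instead of starting from scratch.
  while (FLastCachedNode <> nil) and (FLastCachedTop <= Y) do
  begin
    Inc(FLastCachedTop, Integer(NodeHeight[FLastCachedNode]));
    FPositionCache.Add(FLastCachedNode, FLastCachedTop);
    FLastCachedNode := GetNextVisibleNoInit(FLastCachedNode);
  end;
end;
```

Every code path that currently recomputes positions from the root could call such a helper first, so the work done is never thrown away.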

This is definitely a non-trivial change and quite a bit of work. Ideally, before changing anything, a set of tests should be developed that the new implementation can be checked against.

joachimmarder added the Open for Discussion label Dec 30, 2020
luebbe (Contributor) commented Dec 30, 2020

Couldn't a threaded queue be used instead of posting window messages, or would this destroy backwards compatibility?

modersohn (Author) commented

> Couldn't a threaded queue be used instead of posting window messages, or would this destroy backwards compatibility?

IMO there are 2 clear goals that need to be fulfilled:

  1. Avoid threads completely, because the tree node structure is not thread-safe and I see no chance of that changing in the foreseeable future.
  2. Make the time-consuming calculation of the positional cache asynchronous so that the tree feels faster. The only way (that I'm aware of) to asynchronously carry on work in the main thread is posting window messages. In your scenario, who would empty the threaded queue, and when? The only way I see would involve TThread.Synchronize again, and that's exactly the hell we're trying to escape from (see the sketch below).
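
To illustrate the problem, here is a hedged sketch of the threaded-queue variant; TCacheWorker, TChunkResult, ComputeNextChunk, ApplyChunk, FQueue and FLock are all made-up names:

```pascal
// Hypothetical sketch of the threaded-queue variant, to make both
// problems concrete.
uses
  System.Classes, System.SyncObjs, System.Generics.Collections, Vcl.ExtCtrls;

// Producer side: computing chunks here means walking the node structure
// from a secondary thread - exactly what is not thread-safe (goal 1).
procedure TCacheWorker.Execute;
begin
  while not Terminated do
  begin
    FLock.Enter;
    try
      FQueue.Enqueue(ComputeNextChunk); // touches tree nodes off the main thread!
    finally
      FLock.Leave;
    end;
  end;
end;

// Consumer side: something in the main thread still has to empty the queue.
// A TTimer polls with latency; TThread.Synchronize blocks again (goal 2).
procedure TBaseVirtualTree.DrainTimerTick(Sender: TObject);
begin
  FLock.Enter;
  try
    while FQueue.Count > 0 do
      ApplyChunk(FQueue.Dequeue);
  finally
    FLock.Leave;
  end;
end;
```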

pyscripter (Contributor) commented

See my alternative in #1001
