Signals should be able to efficiently return to an old value #256
I'm not sure versions are the right way to see it. Consider this scenario:

```js
const a = new Signal.State(1); // version 1
const b = new Signal.State(2); // version 1
const c = new Signal.State(3); // version 1
const abc = new Signal.Computed(() => a.get() + b.get() + c.get());
assert(abc.get() === 6); // combined versions 1,1,1

const watcher = new Signal.subtle.Watcher(() => {});
watcher.watch(abc);

a.set(11); // version 2
c.set(33); // version 2
a.set(1);  // back to version 1
assert(abc.get() === 36); // combined versions 1,1,2 - must recompute
```

Even with something as simple as 3 sources, memoizing previous versions would become a mess. Now imagine 10 sources. I think the reasonable way to support your original scenario is for the computed to remember the last known value of each of its sources. Something like this:

```js
const abc = new Signal.Computed(() => a.get() + b.get() + c.get());
watcher.watch(abc);
assert(abc.get() === 6);
/*
sources: [
{ signal: a, lastKnownValue: 1 },
{ signal: b, lastKnownValue: 2 },
{ signal: c, lastKnownValue: 3 },
]
*/
a.set(11);
c.set(33);
a.set(1);
c.set(3);
/*
sources: [
{ signal: a, lastKnownValue: 1 }, // same
{ signal: b, lastKnownValue: 2 }, // same
{ signal: c, lastKnownValue: 3 }, // same
// = NOT RECOMPUTE
]
*/
assert(abc.get() === 6);
a.set(11);
/*
sources: [
{ signal: a, lastKnownValue: 1 }, // different
{ signal: b, lastKnownValue: 2 }, // same
{ signal: c, lastKnownValue: 3 }, // same
// = RECOMPUTE
]
*/
assert(abc.get() === 16);
```
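To make the idea above concrete, here is a minimal self-contained sketch (the `state` helper and `computedWithLastKnownValues` are hypothetical names, not the proposal API or the polyfill) of a computed that only recomputes when a source's value has actually changed:

```js
// Self-contained sketch: a tiny state cell plus a computed that stores each
// source's last known value and skips recomputation when every source still
// compares equal to what it saw last time, no matter how many sets happened.
const state = (value) => ({
  get: () => value,
  set: (next) => { value = next; },
});

function computedWithLastKnownValues(sources, fn, equals = Object.is) {
  let lastKnown = null; // last known value of each source
  let cached;
  return {
    get() {
      const current = sources.map((s) => s.get());
      const unchanged =
        lastKnown !== null && current.every((v, i) => equals(v, lastKnown[i]));
      if (!unchanged) {
        cached = fn(...current); // only recompute on a real change
        lastKnown = current;
      }
      return cached;
    },
  };
}

const a = state(1), b = state(2), c = state(3);
const abc = computedWithLastKnownValues([a, b, c], (x, y, z) => x + y + z);
console.log(abc.get()); // 6
a.set(11); a.set(1);    // a ends up back at its old value
console.log(abc.get()); // 6, returned from cache without recomputing
```

This is the same bookkeeping the `sources: [{ signal, lastKnownValue }]` annotations above describe, just written out as code.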
So my org's legacy implementation of Signals does that kind of thing: it stores the lastValue on the edges of the dependency graph. It is a design decision we regret. Storing previous values on the node has a few advantages:

- The last seen version is already stored in the polyfill implementation; downstream nodes don't need to change anything to take advantage of this.
I like using the internal cache because it is behavior that you can specifically opt into in cases where it is actually helpful, and it has almost no cost on the system as a whole. Additionally, storing the values on the edges can lead to significant memory leaks; our legacy implementation had to implement a […].

Thanks for the examples in #197, I am going to edit the original post to use those, since I think they are clearer.
What do you mean by your […]?

Here are some advantages of storing values on the edges (rather than having a cache on the node): […]

Regarding memory, it is true that keeping a lot of references to dirty signals can lead to a lot of previous values being kept in memory, but that is already the case today, as mentioned in #254. I would not call that a memory "leak": the memory is not lost, it is linked to the dirty signals, it is used when updating a signal, and it is freed when no longer needed (when the signal is up to date, or when it is garbage collected). As suggested in #254, maybe we could also have an explicit […].
We have some Puppeteer tests that load our application and then put it in a state where we expect there to be 100s of instances of class […], so we have a […].

A downside of storing the values on the edges is that you have to remember to opt in to the […].
Since the signal knows which valueVersions consumers last used, it is trivial for it to drop cached values that correspond to valueVersions that no consumers are depending on.
No, they correspond to the number of dependency edges, since each edge holds that data in memory. Obviously, if they each hold a reference to the same object, then the only duplicated data is the reference.
Same as the last point: even when you aren't holding old values, you are still holding many copies of the most recent value. Our application has ~1 million dependency edges, so each extra byte on each dependency edge results in 1MB of additional memory usage.
I prefer having "the whole system is guaranteed not to recompute dependencies when it is not needed" by default, with the option to opt out when I know I have to (because of memory constraints), rather than having to opt in to what many consider the normal behavior of a computed.
You can easily have a counter of the maximum number of consumers that are using a valueVersion, which allows getting rid of some values when it reaches 0, but it may not be the exact number if some consumers used an old version and have since been garbage collected. Or are you keeping a reference to all consumers with a […]?
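For illustration, a minimal sketch of that counter idea (the `VersionCache`, `retain`, and `release` names are hypothetical, not taken from the polyfill), with the caveat from above that the counts stay exact only if consumers reliably release the versions they used:

```js
// Hypothetical sketch: cached values keyed by version, plus a count of
// consumers whose last-seen version is that one. When a count reaches 0 the
// cached value is dropped. If a consumer is garbage collected without calling
// release(), the count overestimates and the value is kept longer than needed.
class VersionCache {
  constructor() {
    this.values = new Map(); // version -> cached value
    this.counts = new Map(); // version -> number of consumers on it
  }
  store(version, value) {
    this.values.set(version, value);
  }
  retain(version) {
    this.counts.set(version, (this.counts.get(version) ?? 0) + 1);
  }
  release(version) {
    const next = (this.counts.get(version) ?? 0) - 1;
    if (next <= 0) {
      this.counts.delete(version);
      this.values.delete(version); // no consumer depends on this version any more
    } else {
      this.counts.set(version, next);
    }
  }
}
```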
There is a fixed overhead of another ~8 bytes per edge that I can't think of a reasonable way to opt out of.
The "reference to the consumer" is the "edge" of the dependency graph, so you need to store that one way or another for the signal to be reactive, and you would definitely need to store it to allow for storing lastValues on the edges. |
In certain cases a Producer changes to a new value, but then later resets to an older value. In the current spec / implementation all downstream Consumers must recompute, even if the values of all signals they are consuming are equal to what they were the previous time they were consumed.
Imagine this scenario (example taken from #197):
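A sketch in the spirit of the code earlier in this thread (not necessarily the exact example from #197):

```js
// A producer returns to a value it had before, but its version has moved on,
// so the computed below recomputes even though every input it reads is equal
// to what it read last time.
const a = new Signal.State(1);
const b = new Signal.State(2);
const c = new Signal.State(3);
const abc = new Signal.Computed(() => a.get() + b.get() + c.get());
assert(abc.get() === 6);

a.set(11);
a.set(1); // back to the old value, but with a new version
assert(abc.get() === 6); // currently this forces abc to recompute
```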
The best way for a Producer to be able to return an old value and have consumers "just work" would be to allow it to return to an old value version. In order to support the ability to return to an old version, this check must be removed from the implementation: […]
It can be improved by allowing the Producer to be asked if it can possibly return the `seenVersion`: […]

Then the Producer implementation will be allowed to store a cache of `N` previous values, mapping to the previous version numbers. If the `seenVersion` is in that map, then `canMaybeProduceVersion` will return true, forcing a recalculate.

For a `State`, when a new value is being set; for a `Computed`, when a new value is calculated: the new value will be compared against cached values using the `equals` callback. The first entry that returns positively will be used, and the value `version` will be reset to that number.

I think this should be an option during the construction of `State` and `Computed`, like: […]

If #255 is implemented then a value will be considered "dropped" when it is evicted from the cache. A newly calculated value will be immediately dropped if a cached value is going to be used instead.
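As a rough sketch of how such a cache could behave on a `State`-like producer (the class name, the `previousValueCacheSize` option, and the exact `canMaybeProduceVersion` semantics are illustrative assumptions based on the description above, not a proposed API):

```js
// Illustrative only: a State-like cell that keeps up to N previous values
// keyed by version, and returns to an old version when a newly set value
// matches a cached one via `equals`.
class VersionedState {
  constructor(value, { equals = Object.is, previousValueCacheSize = 1 } = {}) {
    this.equals = equals;
    this.cacheSize = previousValueCacheSize;
    this.version = 1;
    this.value = value;
    this.cache = new Map([[this.version, value]]); // version -> value
  }
  get() {
    return this.value;
  }
  set(next) {
    for (const [version, cached] of this.cache) {
      if (this.equals(next, cached)) {
        // The new value matches a cached one: reuse that version, and the
        // newly set value is effectively "dropped" in favor of the cached one.
        this.version = version;
        this.value = cached;
        return;
      }
    }
    this.version += 1;
    this.value = next;
    this.cache.set(this.version, next);
    // Evict the oldest cached values beyond the configured size,
    // keeping the current value.
    while (this.cache.size > this.cacheSize + 1) {
      const oldest = this.cache.keys().next().value;
      if (oldest === this.version) break; // never evict the current value
      this.cache.delete(oldest);
    }
  }
  // Could this producer still return `seenVersion`? If so, a consumer that
  // last saw that version may be able to skip recomputation after checking.
  canMaybeProduceVersion(seenVersion) {
    return this.cache.has(seenVersion);
  }
}

const s = new VersionedState(1, { previousValueCacheSize: 1 });
const seen = s.version; // a consumer records the version it used
s.set(11);
s.set(1); // matches the cached value, so the version resets to `seen`
console.log(s.version === seen); // true: that consumer would not need to recompute
```

A `Computed` could run the same comparison against the value it has just calculated, as described above.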