Overlapping cache for mutations with same fixedCacheKey
#4503
That seems to be exactly the behavior you "asked for" by specifying the same `fixedCacheKey`. There definitely isn't an option to change this at the hook level, and I'm not sure off the top of my head if that's feasible given how the internals are implemented atm.
Thanks @markerikson. I completely agree with all of this. My experience was one cache entry at the api level, it appears: endpoint A's mutation cache was getting populated when endpoint B's mutation was fired, the common denominator being the shared `fixedCacheKey`. Perhaps I can clone the repo and try to add a test covering this case (if not already present) to see if it fails.
I think this is still a "that is what your code has configured it to do" thing :) Internally, RTKQ keeps data in the Redux store as lookup tables where the keys are the serialized arguments:

```js
rootState: {
  api: {
    queries: {
      ["getPokemon('pikachu')"]: cacheEntry1,
      ["getPokemon('charizard')"]: cacheEntry2,
      ["getPokemon('bulbasaur')"]: cacheEntry3,
      ["getTodos(undefined)"]: cacheEntry4,
    },
    mutations: {
      ["addTodo('1')"]: tempCacheEntry1,
      ["addTodo('2')"]: tempCacheEntry2,
      // etc
    }
  }
}
```

When you use `fixedCacheKey`, that key is used as the cache key instead of the serialized arguments. I think, based on a quick code skim, that we don't attempt to differentiate that by endpoint at all. So, we don't have keys like `["addTodo:shared-key"]` vs. `["editTodo:shared-key"]`. And in that case, yes, if you specify the same `fixedCacheKey` for two different mutation endpoints, they will end up sharing the same cache entry.
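The collision can be sketched in isolation. This is a simplified model of the key scheme described above, not RTK Query's actual internals; the function name and key format are illustrative:

```typescript
// Simplified sketch of the cache-key scheme described above (illustrative,
// not RTK Query's real implementation). Without a fixedCacheKey, the key is
// derived from the endpoint name plus a per-call request id, so entries from
// different endpoints can never collide. With a fixedCacheKey, the key is
// the fixedCacheKey alone, with no endpoint prefix.
function getMutationCacheKey(
  endpointName: string,
  requestId: string,
  fixedCacheKey?: string
): string {
  return fixedCacheKey ?? `${endpointName}(${requestId})`;
}

// Two different endpoints given the same fixedCacheKey map to one entry:
const keyA = getMutationCacheKey("addTodo", "req1", "shared-key");
const keyB = getMutationCacheKey("editTodo", "req2", "shared-key");
console.log(keyA === keyB); // true: they share (and overwrite) one cache entry
```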
Ahh okay, got it! I think it's just a thing of expectations not matching the design. So what is happening is that the `fixedCacheKey` becomes the entire cache key, with no per-endpoint scoping. Thanks for the detailed explanation. It took me a good bit of time debugging to realize what was happening: I was seeing loading states for one mutation when it hadn't even fired yet! In light of the explanation we can close this issue. I wonder if it could be added to the documentation as well? Regardless, I'll make a note when explaining to my team how to adopt it, as it's something new we are trying.
Yeah. To be honest we probably could (and should?) prefix it. Not sure why we didn't. I can't immediately think of a good reason why you would ever want separate mutations to collide like this. The actual intent of the `fixedCacheKey` option was to let multiple components share the result of the same mutation endpoint, not to link different endpoints. So, I'd honestly consider this a bug.
Yeah, that's my immediate suspicion as well. Hopefully there isn't someone out there counting on the existing collision behavior. (If there is, we could also provide an option to skip the prefix. But that could be a bit much.) If this kind of collision is desired, it could still be possible in other ways. One way is a hook/selector which returns a merged view of the two mutation states. I'm happy to give the fix a go if you think we should proceed and don't mind!
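For reference, the merged-view idea could look roughly like this. It is a standalone sketch with a simplified state shape; `MutationState` and `mergeMutationStates` are hypothetical names, not RTK Query APIs:

```typescript
// Hypothetical workaround sketch: merge two mutation cache entries into one
// view without sharing a cache key. The MutationState shape is simplified;
// it is not RTK Query's internal type.
type MutationState<T> = {
  status: "uninitialized" | "pending" | "fulfilled" | "rejected";
  data?: T;
};

function mergeMutationStates<T>(
  a: MutationState<T>,
  b: MutationState<T>
): MutationState<T> {
  // Prefer a fulfilled entry, then any entry that has been initiated.
  if (a.status === "fulfilled") return a;
  if (b.status === "fulfilled") return b;
  if (a.status !== "uninitialized") return a;
  return b;
}

// e.g. a "register" result standing in for a "login" result:
const merged = mergeMutationStates<{ user: string }>(
  { status: "uninitialized" },
  { status: "fulfilled", data: { user: "ash" } }
);
console.log(merged.status); // "fulfilled"
```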
Yeah, tell you what, go ahead and file a PR. I expect that the actual fix is just going to be something like prefixing the `fixedCacheKey` with the endpoint name internally.
Probably only here:
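The prefixing fix being discussed could look roughly like this. It is a sketch under the assumption that the key is built in one place; the function name and exact format are illustrative, not the actual patch:

```typescript
// Sketch of the proposed fix: scope a fixedCacheKey to its endpoint by
// prefixing it with the endpoint name, so equal keys on different
// endpoints no longer collide. Names and format are illustrative.
function getScopedCacheKey(endpointName: string, fixedCacheKey: string): string {
  return `${endpointName}(${fixedCacheKey})`;
}

const scopedA = getScopedCacheKey("addTodo", "shared-key");  // "addTodo(shared-key)"
const scopedB = getScopedCacheKey("editTodo", "shared-key"); // "editTodo(shared-key)"
console.log(scopedA !== scopedB); // true: distinct cache entries now
```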
I can think of one edge-casey scenario where someone would actually want that:

```ts
endpoints: (builder) => ({
  register: builder.mutation(/* ... */),
  login: builder.mutation(/* ... */),
})
```

```ts
function Profile() {
  // both useRegisterMutation and useLoginMutation are called with
  // the same fixedCacheKey elsewhere in the app
  const [, { data }] = useLoginMutation({ fixedCacheKey: "user" })
}
```

If the user registers and is directly logged in (without also hitting the login mutation), that will also fill the `data` for the `login` mutation's cache entry. If we ever intended that to be possible... I honestly can't remember.
Hmm, thanks @phryneas. This is a motivating case for sharing the same cache key across mutations. Leaving the behavior as is provides flexibility for people who want to do this, even if it was never intended! Perhaps we can get away with just updating the docs to call out how `fixedCacheKey` is shared across endpoints. The reverse, opting into the colliding cache, sounds like it may be more tricky/risky to achieve?
My own take is that I don't like that pattern :) My first reaction is that using a mutation to read data is a bad approach in the first place, and you ought to be relying on a query for that. (If anything, I think we ought to do more to make it easier to populate a query cache entry from a mutation result, or something along those lines.)
Yeah, I get you. My specific use-case is triggering the mutation on one screen and showing the result on the next screen, so there's no query cache to update; before discovering `fixedCacheKey` I didn't have a clean way to share that result. In my other use-case, the endpoint has a POST method, and the API is autogenerated from Postman, so it ends up as a mutation. Here, I'm considering overriding the API generation to treat it as a query, albeit one that uses a POST method under the hood.
Yeah, one thing that confuses people is that the HTTP method and the query/mutation distinction are orthogonal: a query endpoint can still use `POST` under the hood, since queries vs. mutations are about caching semantics, not HTTP verbs.
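To illustrate, here is a sketch of what a POST-backed query's request descriptor could look like. The endpoint name, URL, and shape are made up for the example; in RTK Query itself this function would be passed as the `query` option of `builder.query`:

```typescript
// Illustrative sketch: a "search" endpoint treated as a query even though it
// uses HTTP POST under the hood. Query vs. mutation is about caching
// semantics, not the HTTP verb. In RTK Query this function would be the
// `query` option of builder.query; it is shown standalone here.
type RequestDescriptor = { url: string; method: string; body: unknown };

const searchQuery = (term: string): RequestDescriptor => ({
  url: "/search",
  method: "POST", // POST, but still a cached, refetchable query
  body: { term },
});

const req = searchQuery("pikachu");
console.log(req.method, req.url); // POST /search
```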
Hey, any updates on this problem? Currently I am creating a post, then navigating to another page. That page then has to query the backend for the data the mutation just got back. This feels wrong, as I have to write a new endpoint for that specific query and refetch data the mutation already returned.
@MarvinCorro that sounds like a different question entirely. This issue is describing the use of the same `fixedCacheKey` across multiple mutation endpoints. If you have a separate question, please open a new issue for it.
Thanks for the clarification, I'll go ahead and open a new issue for that.
Background
Specifying the same `fixedCacheKey` across mutations leads to overlapping caches between the two mutations. While the fix for this is as simple as ensuring cache keys are not repeated across mutations, it is a bug that is easy to run into. Should `fixedCacheKey`s have this behavior, or should it result in a mutation cache specific to the endpoint it is provided for?
Example
Potential Solution
Desired Outcome
The desired outcome is that the same cache key can be used across endpoints without their caches overlapping, since they are separate endpoints.
Having diagnosed and found a solution to the problem, it is no longer a pressing issue; however, it would be lovely if this worked as expected, assuming that aligns with the library's goals! Thanks, and happy to help resolve.
RTK Version
1.9.0