Best Practice For Modelling Aggregate Grains #7130
Replies: 3 comments 2 replies
-
I've met similar scenarios before. At the time, I chose to get rid of Orleans' built-in persistence model: all persistence behavior was handled by self-implemented application code, so the data structures in the database were under my control and I could configure indexes on the data fields, which let me generate and export reports from that data. Recently I've been building another project with Orleans, and with the trauma from the old days, this time I prefer to do just what you said - use an ETL process to push report data to Elasticsearch. I hope it has a good ending this time. Back to your question - "Best Practice For Modelling Aggregate Grains":
I prefer this approach. Even for projects that don't use Orleans, separating report data from the original business entity data is still a good pattern. It may seem complicated at the start, but it is very likely to reduce complexity down the road.
I won't give my vote to this one. IMO Orleans has no real advantage for reporting compared to other tools, especially when you have separate teams working on data analysis and application development. This answer is based entirely on my own experience; the "best practice" mostly depends on your scenario. For example, if your application doesn't require heavy, complicated reporting logic, you can certainly take option one.
-
I forget where I read it, but one recommendation is to bypass Orleans and use your persistence layer's APIs directly (i.e. SQL queries, stored procedures, or whatever your persistence supports) to perform the aggregate operations. One idea is to use grain timers/reminders to execute those operations on a schedule, or a one-way call from another grain, to create your aggregates. There are also some patterns with explanations in the OrleansContrib DesignPatterns repo: https://github.com/OrleansContrib/DesignPatterns
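The timer idea above can be sketched roughly as follows. This is a hypothetical example, assuming Orleans 3.x (`Grain.RegisterTimer`; newer versions use `RegisterGrainTimer`) and `Microsoft.Data.SqlClient`; the grain name, table names, and schedule are all illustrative, not from the thread.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
using Orleans;

public interface IDailySalesAggregatorGrain : IGrainWithIntegerKey { }

// A single "singleton" grain that periodically asks the database to do
// the aggregation, instead of fanning out to thousands of entity grains.
public class DailySalesAggregatorGrain : Grain, IDailySalesAggregatorGrain
{
    private IDisposable? _timer;

    public override Task OnActivateAsync()
    {
        _timer = RegisterTimer(
            RefreshAggregatesAsync,
            null,
            dueTime: TimeSpan.FromMinutes(1),
            period: TimeSpan.FromMinutes(15));
        return base.OnActivateAsync();
    }

    private async Task RefreshAggregatesAsync(object _)
    {
        // Let SQL do the heavy lifting; the grain never loads the rows.
        await using var conn = new SqlConnection("<connection-string>");
        await conn.OpenAsync();
        var cmd = conn.CreateCommand();
        cmd.CommandText = @"
            TRUNCATE TABLE dbo.DailySales;
            INSERT INTO dbo.DailySales (Day, Total)
            SELECT CAST(CreatedAt AS date), SUM(Amount)
            FROM dbo.Orders
            GROUP BY CAST(CreatedAt AS date);";
        await cmd.ExecuteNonQueryAsync();
    }
}
```

Reporting apps then query `dbo.DailySales` directly and never touch grains at all.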
-
If event sourcing is an option, then it would be pretty easy to build a projection from the db.
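A projection in this sense is just a fold over the persisted event stream into a queryable report model, and it can run completely outside Orleans. A minimal sketch, assuming events are already stored (e.g. via a `JournaledGrain` or any event store); the event and row types here are hypothetical:

```csharp
using System.Collections.Generic;

public abstract record AccountEvent(string AccountId);
public record Deposited(string AccountId, decimal Amount) : AccountEvent(AccountId);
public record Withdrawn(string AccountId, decimal Amount) : AccountEvent(AccountId);

public static class BalanceProjection
{
    // Fold the event stream into per-account balances; the result can be
    // written to a flat SQL table or an Elasticsearch index for reporting.
    public static Dictionary<string, decimal> Project(IEnumerable<AccountEvent> events)
    {
        var balances = new Dictionary<string, decimal>();
        foreach (var e in events)
        {
            balances.TryGetValue(e.AccountId, out var current);
            balances[e.AccountId] = e switch
            {
                Deposited d => current + d.Amount,
                Withdrawn w => current - w.Amount,
                _ => current
            };
        }
        return balances;
    }
}
```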
-
Quite new to Orleans and still exploring its features and capabilities, so please bear with me.
What is the best practice for modelling aggregate grains? I have not really seen an aggregate grain in the samples included in the Orleans source. The TwitterSentiment sample has a TweetDispatcherGrain, which looks like an aggregate grain, but it seems too chatty because it fetches individual grains. That doesn't matter in the Twitter example, but if there were thousands of grains, it would take thousands of storage/IO accesses too.
Another question: the persistence model seems to store grains as a JSON/XML/blob. If I had a separate application that needs to generate reports, how is that normally done? What's the best practice?
Coming from traditional SQL data types, it is quite easy for reporting programs to just issue a query, but since the grain state is a blob, I reckon I have to add an ETL process just so the reporting apps can work, or make the reporting app able to decipher the blob storage. Either way it adds another layer of complexity.
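The "decipher the blob" option in the paragraph above can be fairly small if the state is stored as JSON (e.g. the ADO.NET storage provider configured with a JSON serializer). A hedged sketch of the ETL step that flattens one grain-state payload into a report row; the state shape, property names, and row type are assumptions for illustration:

```csharp
using System.Text.Json;

public record OrderReportRow(string GrainId, string Customer, decimal Total);

public static class GrainStateEtl
{
    // Parse one JSON grain-state blob and pull out the fields the
    // reporting database needs as plain, indexable columns.
    public static OrderReportRow Flatten(string grainId, string payloadJson)
    {
        using var doc = JsonDocument.Parse(payloadJson);
        var root = doc.RootElement;
        return new OrderReportRow(
            grainId,
            root.GetProperty("Customer").GetString() ?? "",
            root.GetProperty("Total").GetDecimal());
    }
}
```

An ETL job would run this over each row of the grain-state table and bulk-insert the resulting rows into a reporting table, so reporting apps never have to understand the blob format.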
Or should the reporting app talk to a reporting server that accesses the grains and builds the reporting model? Again, the samples do it one grain at a time. This approach seems inefficient to me, and I believe traditional SQL will beat it in performance, since that is what it is built for.
Again, looking for guidance here.
Thanks.