Problem

When using custom recipes, the recipe has to be available on each machine; otherwise, the service is simply ignored. Currently, users who want custom services to work have to distribute the recipe to each machine by some other means.
Potential solution
There already is a RecipeController with a create() method that compresses uploaded files into a .tar.gz archive and stores it on the server. The method is currently unused, but the idea might be worth pursuing.
Say a Recipe could be attached to a Service to make it private to that user (this could be a separate CustomRecipe model if necessary). If a service uses a custom recipe, the client uploads the recipe along with the service. Conversely, when syncing services, the client checks whether a custom recipe is attached to the service and downloads it if necessary; otherwise, it falls back to the current behavior (i.e. look for a global recipe and use that).
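The sync-time fallback could look roughly like the sketch below. The Service shape and both helper functions are illustrative stand-ins, not actual Ferdi APIs:

```typescript
interface Service {
  recipeId: string;
  customRecipeId?: string; // set when a private recipe is attached
}

// Stand-ins for "download the attached archive" and "look up the
// global recipe" respectively.
function downloadCustomRecipe(id: string): string {
  return `custom:${id}`;
}
function lookupGlobalRecipe(id: string): string {
  return `global:${id}`;
}

function resolveRecipe(service: Service): string {
  if (service.customRecipeId) {
    // A custom recipe is attached to this service: fetch that.
    return downloadCustomRecipe(service.customRecipeId);
  }
  // Fall back to the current behavior and use the global recipe.
  return lookupGlobalRecipe(service.recipeId);
}
```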
One problem with this approach is that a custom recipe could be changed locally, requiring an update to propagate the changes to other machines. This could mean re-uploading the recipe many times over, producing significant load on the server.
A potential solution could be using something like @dldc/rsync, which is a pure TypeScript implementation of the rsync algorithm. It produces a checksum, which can be stored in the database along with the recipe archive, and could be used by the client to check if the local version has changed. The client would have to produce a new archive of the local files for that, but since recipes are typically only a few kB in size, this would not be prohibitively expensive.
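The change check itself is cheap. In this sketch a plain SHA-256 over the archive bytes stands in for the rsync-style checksum; the stored value would live in the database next to the recipe archive:

```typescript
import { createHash } from "node:crypto";

// Hash the freshly produced local archive.
function archiveChecksum(archive: Buffer): string {
  return createHash("sha256").update(archive).digest("hex");
}

// The client re-archives its local recipe files, hashes the result, and
// only contacts the server when the hash no longer matches the stored one.
function needsUpload(localArchive: Buffer, storedChecksum: string): boolean {
  return archiveChecksum(localArchive) !== storedChecksum;
}
```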
If the checksum has changed, rsync can produce a binary patch, which could even avoid having to re-upload the entire recipe in order to update it on the server, which could be important on a high-traffic instance like the official API server. Assuming that producing a tar archive is deterministic, this could work.
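A much-simplified version of that delta idea: the server's copy is split into fixed-size blocks, and the client sends, for each block of the new archive, either a reference to a matching server block or the literal bytes. A real library such as @dldc/rsync uses rolling checksums so matches are found at any offset; this sketch only matches on block boundaries to stay short:

```typescript
import { createHash } from "node:crypto";

const BLOCK = 4; // unrealistically small, for illustration
type Op = { ref: number } | { data: Buffer };

// Index the server-side archive by block content hash.
function blockHashes(old: Buffer): Map<string, number> {
  const map = new Map<string, number>();
  for (let i = 0; i * BLOCK < old.length; i++) {
    const b = old.subarray(i * BLOCK, (i + 1) * BLOCK);
    map.set(createHash("sha1").update(b).digest("hex"), i);
  }
  return map;
}

// Produce a patch: block references where possible, literals otherwise.
function diff(oldBuf: Buffer, newBuf: Buffer): Op[] {
  const hashes = blockHashes(oldBuf);
  const ops: Op[] = [];
  for (let i = 0; i < newBuf.length; i += BLOCK) {
    const b = newBuf.subarray(i, i + BLOCK);
    const idx = hashes.get(createHash("sha1").update(b).digest("hex"));
    ops.push(idx !== undefined ? { ref: idx } : { data: Buffer.from(b) });
  }
  return ops;
}

// The server reconstructs the new archive from its old copy plus the patch.
function apply(oldBuf: Buffer, ops: Op[]): Buffer {
  return Buffer.concat(
    ops.map((op) =>
      "ref" in op
        ? oldBuf.subarray(op.ref * BLOCK, (op.ref + 1) * BLOCK)
        : op.data,
    ),
  );
}
```

Only the literal ops cross the wire in full, so a small local edit costs far less upload than re-sending the whole archive.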
"Unfortunately", tar archives include metadata such as file permissions and ownership, which can differ between machines: an archive of the same recipe produced on, say, Windows might differ from one produced on Linux. This would require some sort of normalization step to ensure that archives are produced identically no matter where they are created.
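The normalization boils down to two rules: fix the entry order and replace machine-dependent metadata with constants. The sketch below demonstrates the idea on a made-up byte layout; a real implementation would instead pin the corresponding tar header fields (mode, uid, gid, mtime) while writing the archive:

```typescript
interface Entry {
  name: string;
  content: Buffer;
}

// Serialize entries so the same files always yield the same bytes,
// regardless of which OS or filesystem produced them.
function canonicalize(entries: Entry[]): Buffer {
  return Buffer.concat(
    entries
      .slice()
      .sort((a, b) => a.name.localeCompare(b.name)) // fixed entry order
      .map((e) =>
        Buffer.concat([
          // constant stand-in metadata instead of whatever the local FS reports
          Buffer.from(`${e.name}\0mode=644 uid=0 gid=0 mtime=0\0`),
          e.content,
        ]),
      ),
  );
}
```

With this in place, checksums computed on different machines agree whenever the file contents agree, which is what the update check above depends on.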