It would be very convenient to be able to execute jobs on-demand (for testing, etc.). This is especially true if the job is not just a simple command but requires setting the working directory, some environment variables, and so on - executing such a job via a simple reference to its name on the deck-chores container would be a lot easier than building the correct "docker exec" (or its swarm equivalent) command by hand.
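To illustrate the tedium, here is a minimal sketch of what a user currently has to assemble by hand. The helper function and its signature are made up for illustration; the "docker exec" flags it emits (--workdir, --env) are real CLI options:

```python
def docker_exec_argv(container, command, workdir=None, env=None):
    """Assemble the argv for an equivalent ``docker exec`` call.

    Illustrative only: shows the boilerplate a user must get right
    manually when a job needs a working directory and environment.
    """
    argv = ["docker", "exec"]
    if workdir is not None:
        argv += ["--workdir", workdir]
    for key, value in (env or {}).items():
        argv += ["--env", f"{key}={value}"]
    argv.append(container)
    argv += list(command)
    return argv
```

A job with one env var and a working directory already yields something like `docker exec --workdir /app --env MODE=full web backup.sh` - and every one of those details must be copied correctly from the job's label configuration.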
Unfortunately, looking at the code of deck-chores, I don't see a good place to integrate such functionality. Using signals won't work, since Python doesn't pass any arguments to the signal handler.
The only spot I could think of (without some extra development of RPC support in deck-chores) is the events listener: it could be abused by watching for, e.g., exec events on the deck-chores container and parsing the name of the job to trigger out of the exec command (which itself would obviously fail to run). Convoluted, I know.
If you think it's a good feature, I'm open to discussion as to how to implement it.
ciao, thanks for your proposal. i think there was a similar request once. i certainly don't have the time to implement such a feature; other concerns for this project are of higher priority to me.
what about re-designing the cli-interface to introduce the subcommands daemon and run? with no provided subcommand, it would invoke the daemon command and behave as it does currently.
i'm assuming that this doesn't require a larger refactoring and would allow reusing most of the required code. and it isn't hackish as an interface.
for the run command, the interface could be deck-chores run CONTAINER JOB....
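a minimal sketch of how that interface could be wired up with argparse, assuming the subcommand names and arguments proposed above (none of this is deck-chores' actual code):

```python
import argparse

def build_parser():
    # hypothetical sketch of the proposed subcommand interface;
    # names and arguments are assumptions, not existing deck-chores code
    parser = argparse.ArgumentParser(prog="deck-chores")
    subparsers = parser.add_subparsers(dest="command")
    subparsers.add_parser("daemon", help="run the scheduler (current default behaviour)")
    run = subparsers.add_parser("run", help="trigger defined jobs once")
    run.add_argument("container", help="name of the container the jobs are defined on")
    run.add_argument("job", nargs="+", help="one or more job names to execute")
    return parser

def parse_args(argv):
    args = build_parser().parse_args(argv)
    if args.command is None:
        # no subcommand given: behave as today, i.e. start the daemon
        args.command = "daemon"
    return args
```

since subparsers are optional by default, invoking the program with no arguments falls through to the daemon behaviour, which keeps existing deployments working unchanged.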
one thing that users may bring up once this feature is available is that they want to define jobs without a time-based trigger. this shouldn't be a problem.
some people may ask to expose configuration options via the cli, but i'd be hesitant about that.
i don't care whether you open a pull request with a wip state to get early feedback or a completely implemented solution. thorough reviews are guaranteed!