watchmedo shell-command: drop events during execution of command #136
Comments
Please don't change it;
|
@ryandesign The command-line help states which behavior the -w/--wait option has:

    $ watchmedo shell-command -h
    ...
      -w, --wait            wait for process to finish to avoid multiple
                            simultaneous instances
|
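For reference, a typical invocation with --wait looks roughly like this (the pattern, the command and the directory are only placeholders):

```
watchmedo shell-command \
    --patterns="*.py" \
    --recursive \
    --wait \
    --command='make build' \
    .
```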
Can you explain again what you wanted or what the bug is? |
As described in the first message. EXAMPLE: |
Still don't get it. It executes exactly once. Just tested by triggering a batch of events:
As expected. |
This is an issue for us as well. We're using watchdog to watch our source directory to trigger an automatic build of the software. When changing git branches, events are triggered for each file and directory change. Ideally, all of those events would trigger just one build. What we'd like is to have the functionality of the |
To accommodate both @ryandesign and @chrisconley, perhaps a pair of new options along the lines of --event-delay and --max-wait-queue would work.
|
To work around this I wrapped my test-runner command in a lockfile. From my tricks.yaml:
|
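A minimal sketch of that approach, assuming flock(1) guards the lock and pytest is the test runner (the lock path, patterns and command are placeholders, not the original snippet):

```yaml
tricks:
- watchdog.tricks.ShellCommandTrick:
    patterns: ["*.py"]
    shell_command: flock --nonblock /tmp/testrunner.lock -c "python -m pytest"
```

With --nonblock, a run that starts while another invocation still holds the lock exits immediately instead of queuing up behind it.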
Using --command 'fab test' always runs the tests twice for me... using -w doesn't make a difference. Is this a problem with fabfiles? |
Reading this issue's comments as a newcomer, it seems people are talking about different functionality. @jonaldinger's suggestions seem the most sensible and would make watchmedo a lot more useful. Even copying a file into a directory will trigger multiple events:
My specific use case is that I want to uncompress an archive once it has been copied in, but watchdog will spam the command for every event; I would only like to uncompress the archive when the copy has finished. While I can work around it, @jonaldinger's --event-delay would make this use case trivial. |
+1, --event-delay would be great! Currently watchdog with --wait kicks off 6 builds in a row when I save one file in vim, which is particularly overwhelming when there's a build error. I would have thought --event-delay implies --max-wait-queue=1, though, so I don't see why both flags would be necessary. If you're going to be dropping events at all, it doesn't seem like there's a use case for keeping multiple events that accumulate during a command's execution. |
@gfxmonk I've added a --drop option, which causes events that arrive while the command is still executing to be dropped. The modifications are in Pull #231 or in nuket/watchdog@11c64d6. |
Merged as |
Excellent. Thanks for the quick turnaround.
|
I love |
Great little utility, thanks. The watchmedo feature I was hoping for (and I think others too, from rummaging around in feature requests) may be different than --drop: if there is a batch of changes happening, I don't want the shell-command to be run until after the changes have all happened, and only once, since they are all the result of the same cascade of events. In other words, I would like the system to wait X milliseconds after each event to see if there are other events to be batched together with it, and then finally trigger the command after the first X milliseconds monitored without changes. You might call it --idletrigger.

--drop seems to trigger the shell command immediately when the first change happens and then ignore subsequent changes, which is quite a different feature set, highly likely to trigger race conditions, and a bit surprising that it's what people want, assuming they're using it for the kind of cascade commands I'm working with: recursively monitoring whole directories where a recompile has extensive knock-on changes which influence a lot of files, and where those knock-on effects may take quite a few milliseconds to fully propagate. It's not just the first change which should be noticed, or the ones which have raced to be completed in time; the next pipeline should only be triggered after all the changes are complete.

By all means tell me if my testing is wrong and I've misunderstood --drop. Also happy to file a new issue if others subscribed to this one really do want --drop and don't want --idletrigger.

Also, I may have missed the full help or manual, as I can't seem to find a complete list of the commands available in my copy of watchmedo, so it may be hidden behind another command-line flag which I haven't examined yet.
...doesn't list the --drop flag, for example, although it seems to be honored and...
...says the utility is undocumented in man. |
@cefn That seems reasonable to me. What would make sense IMO/for me:
(doing |
@nuket I know it's been a long time, but is there any room to resubmit that PR? What @cefn described is exactly what I want.
I don't want that. I want watchmedo to start executing on the last file change. Obviously that's not possible to detect directly, but a smoothing window of a couple of milliseconds would solve the issue perfectly. This is a well-known concept in programming too; for instance, lodash has a function called debounce that does exactly this.
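For anyone who wants this behaviour today, here is a minimal sketch of the debounce idea built directly on watchdog's Python API rather than on watchmedo; the class name, the 0.5-second delay and the make build command are illustrative only:

```python
import subprocess
import threading
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class DebouncedShellCommand(FileSystemEventHandler):
    """Run `command` once, `delay` seconds after the last observed event."""

    def __init__(self, command, delay=0.5):
        super().__init__()
        self.command = command
        self.delay = delay
        self._timer = None
        self._lock = threading.Lock()

    def on_any_event(self, event):
        # Every new event cancels the pending run and restarts the quiet-period timer.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
            self._timer = threading.Timer(self.delay, self._run)
            self._timer.start()

    def _run(self):
        subprocess.call(self.command, shell=True)


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(DebouncedShellCommand("make build"), ".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```

Each event cancels and restarts the timer, so the command fires only after the watched tree has been quiet for the full delay, which is the --idletrigger behaviour described above.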
(original title): watchmedo shell-command --wait seems to fire more often than necessary
Version: watchdog 0.6.0
When I execute the watchmedo script with "--wait --command=...", the command seems to be triggered too often: according to the generated command output it is executed once for each detected event. This seems unnecessary and inefficient. Normally, I would expect the following behavior: events that occur while the command is running are accumulated (or dropped), and the command is executed at most once more for the whole batch.
REASON:
When the command is executed it should rebuild whatever and be responsible for all accumulated events.
The efficient behaviour could also be enabled by a new, additional command-line option.
NOTE:
If you do not use the "--wait" option you can still use the nice debug/diag feature:
to detect which filesystem events occurred.
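That debug/diag feature is presumably a --command that just echoes the event details; watchmedo substitutes variables such as ${watch_event_type} and ${watch_src_path} into the shell command, so something along these lines prints one line per filesystem event (patterns and directory are placeholders):

```
watchmedo shell-command \
    --patterns="*" \
    --recursive \
    --command='echo "${watch_event_type} ${watch_src_path}"' \
    .
```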