Run custom frames #11
1. Change the testset_root in configs/config_test_w_sgm.py.
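For reference, a minimal sketch of that config edit. Only testset_root is named above; the other variable names and paths below are assumptions for illustration, so check the actual configs/config_test_w_sgm.py for the exact fields:

```python
# configs/config_test_w_sgm.py (excerpt)
# testset_root is the field referenced above; the other names and paths are
# illustrative assumptions. Keep whatever the shipped config actually defines.
testset_root = './datasets/my_custom_frames'       # folders of input frame triplets
test_flow_root = './datasets/my_custom_sgm_flows'  # pre-computed SGM flows (.npy)
store_path = './outputs/my_custom_results'         # where interpolated frames are written
```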
If you want to add frames to a complete video, I guess the code would need to be changed a lot.
Thanks! But please don't rename the "new frame" folders to match the pre-computed SGM flows. Otherwise, the wrong initial flow will definitely mislead the network... The correct way is to generate new SGM flows for your own data using the code in models/sgm_model. We will try to write a guide as soon as we can...
A script (with the specific hyper-parameters) to generate SGM flow for custom data would be greatly appreciated.
The test code I used does read the pre-computed SGM flow, but I replaced the optical flow in the pre_calc_sgm_flows folder with optical flow generated by RAFT, and it did not affect the final exported result.
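For anyone who wants to try the same substitution, here is a hedged sketch that computes forward and backward flows with torchvision's pretrained RAFT and saves them as .npy files. The output file names and the (H, W, 2) array layout are assumptions; mirror the names and shapes of the original files in the pre-computed SGM flow folder.

```python
# Hedged sketch: compute bidirectional flow between two frames with RAFT and
# save the results as .npy files, as the commenter above describes doing.
import numpy as np
import torch
from torchvision.io import read_image
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval().to(device)

def load_pair(path1, path2):
    # read_image gives uint8 (3, H, W); RAFT expects batched float input in
    # [-1, 1] with H and W divisible by 8 (resize or pad beforehand if needed).
    img1 = read_image(path1).unsqueeze(0)
    img2 = read_image(path2).unsqueeze(0)
    img1, img2 = weights.transforms()(img1, img2)
    return img1.to(device), img2.to(device)

frame1, frame3 = load_pair("frame1.png", "frame3.png")
with torch.no_grad():
    flow13 = model(frame1, frame3)[-1]  # RAFT returns a list of refinements
    flow31 = model(frame3, frame1)[-1]

# Placeholder file names: check the dataset loader for the real convention.
np.save("guide_flo13.npy", flow13[0].permute(1, 2, 0).cpu().numpy())
np.save("guide_flo31.npy", flow31[0].permute(1, 2, 0).cpu().numpy())
```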
I have a question that is semi-related to this issue; please tell me if I should start a new one. If I want to run AnimeInterp on a custom cartoon, should I generate optical flow for it beforehand? Does it make sense to generate it ad hoc, or would that make inference much less efficient? Is there a way to run this without precomputed optical flow at all?
According to my tests, no matter what kind of optical flow is given as input, even an empty (all-zeros) flow or an erroneous one, the impact on the subsequently refined flow is not great, and the impact on the generated frame is even smaller.
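To replicate this "empty flow" experiment, a minimal sketch that zeroes out every pre-computed flow file in place; the folder name and recursive layout are assumptions, so adjust the glob to your data:

```python
# Hedged sketch: overwrite every pre-computed SGM flow with an all-zeros array
# of the same shape and dtype, to test how much the initial flow matters.
import glob
import numpy as np

for path in glob.glob("pre_calc_sgm_flows/**/*.npy", recursive=True):
    flow = np.load(path)
    np.save(path, np.zeros_like(flow))  # same shape/dtype, zero motion everywhere
```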
That's interesting, and I have only just seen it.
Thanks for your comments! Would you please share more details on this experiment? Besides, one thing I wish to ask: did you replace the .npy files in pre_calc_sgm_flows, or only the .jpg visualizations?
Assuming you are on the right track, my guess is as follows. Firstly, note that the training contains 3 steps.
Empty/zeros flow also works, and the impact on the generated frame is smaller. Hope that will be of some help to you.
I replaced the .npy files.
I have been communicating extensively with @YiWeiHuang-stack |
Hi. Thanks for your interest. As to the effect of the SGM module, please refer to the ablation study in our paper. Generally, the SGM module improves the interpolated results (by 0.14 dB on average), especially for cases with large motion. The SSIM values are indeed similar, which matches what is reported in the paper. Also, it may not be suitable to judge the quality of flows used in video interpolation from the appearance of their visualizations (e.g., object boundaries), since the flow network is fine-tuned to fit the interpolation task. On this point, see the very good paper "Video Enhancement with Task-Oriented Flow" by Xue et al.
Another point, on whether the pre-computed SGM step can be skipped: the implementation of SGM is based on color-piece segmentation and matching in "for" loops on CPUs. If the test frames contain too many color pieces, the SGM module will be slow, so we split SGM out as a pre-computed step. For time efficiency when generating a long piece of video, one could optionally modify the model into a w/o-SGM version. But the results of the whole model should be reported in formal settings (e.g., in a paper).
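As a rough illustration of such a w/o-SGM variant, one could skip loading the .npy files and feed all-zeros initial flows at inference time. The forward() signature below is an assumption for illustration, not the repo's actual API; adapt it to the model's real interface:

```python
# Hedged sketch of a "w/o SGM" inference step: instead of loading pre-computed
# SGM flows, pass all-zeros initial flows of matching shape to the model.
import torch

def interpolate_without_sgm(model, frame1, frame3):
    # frame1/frame3: (N, 3, H, W) tensors. The assumed call
    # model(I1, I3, F13_init, F31_init) -> middle frame is illustrative only.
    n, _, h, w = frame1.shape
    zero_flow = torch.zeros(n, 2, h, w, device=frame1.device)
    return model(frame1, frame3, zero_flow, zero_flow)
```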
I got the code running with the provided dataset, but I would prefer to test it with custom frames.
Is there any way to achieve this with the current code, or would it need to be implemented?