
Implement Handheld Multi-Frame Super-Resolution #40

Open
brotherofken opened this issue May 17, 2019 · 6 comments

@brotherofken
Contributor

Google researchers published a paper with details of the implementation of their super-resolution: https://arxiv.org/abs/1905.03277

It does not look trivial, but it seems implementable. The algorithm introduces a new merge approach and produces a demosaiced image directly, so it requires large changes in the pipeline.

Development might be split into stages:

  1. Alignment refinement to sub-pixel accuracy using Lucas-Kanade optical flow iterations (a rough sketch follows this list).
  2. Merge without the robustness term.
  3. Add robustness estimation.
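
As a starting point, here is a minimal NumPy sketch of what stage 1 could look like for a single grayscale tile, assuming a coarse integer offset is already available from block matching. This is not the paper's exact formulation, and the names (`lucas_kanade_refine`, `bilinear_sample`) are hypothetical:

```python
import numpy as np

def lucas_kanade_refine(ref_tile, alt_tile, init_offset, num_iters=3):
    """Refine an integer tile offset to sub-pixel accuracy (stage 1 above).

    ref_tile, alt_tile : 2-D float arrays of the same shape (grayscale tiles).
    init_offset        : (dy, dx) from the coarse block-matching alignment.
    Returns the refined (dy, dx) as floats.
    """
    h, w = ref_tile.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    dy, dx = float(init_offset[0]), float(init_offset[1])

    # Gradients of the reference tile (inverse-compositional style:
    # they stay constant over the iterations).
    gy, gx = np.gradient(ref_tile.astype(np.float64))

    # 2x2 normal matrix A = sum of gradient outer products,
    # lightly regularized so flat tiles do not make it singular.
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]]) + 1e-6 * np.eye(2)

    for _ in range(num_iters):
        # Warp the alternate tile by the current offset (bilinear sampling).
        warped = bilinear_sample(alt_tile, yy + dy, xx + dx)
        err = ref_tile - warped                 # brightness-constancy residual
        b = np.array([np.sum(gx * err), np.sum(gy * err)])
        ddx, ddy = np.linalg.solve(A, b)        # Gauss-Newton update
        dx += ddx
        dy += ddy
    return dy, dx

def bilinear_sample(img, ys, xs):
    """Sample img at float coordinates with bilinear interpolation (edge-clamped)."""
    h, w = img.shape
    ys = np.clip(ys, 0, h - 1.0001)
    xs = np.clip(xs, 0, w - 1.0001)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    fy, fx = ys - y0, xs - x0
    return ((1 - fy) * (1 - fx) * img[y0, x0] +
            (1 - fy) * fx * img[y0, x0 + 1] +
            fy * (1 - fx) * img[y0 + 1, x0] +
            fy * fx * img[y0 + 1, x0 + 1])
```

Running something like this per tile after the coarse alignment would give the sub-pixel displacements that the merge in stage 2 needs.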
@JVision

JVision commented Aug 12, 2019

Hi @brotherofken, I am working on an open-source implementation of the paper: https://github.com/JVision/Handheld-Multi-Frame-Super-Resolution

I would appreciate it if anyone could join in with me.
It is at a very early stage, but please check it out. @brotherofken

@chencuber


Hi @brotherofken, in Fig. 8 of the paper it seems that lambda1/lambda2 is in the range 0–1; however, we know lambda1 is the dominant eigenvalue and should be greater than lambda2. I am confused, do you know why that is?

@JVision

JVision commented Sep 30, 2019

The reason is simple: there are errors in the published work. The authors mean (lambda1 - lambda2)/(lambda1 + lambda2), which always lies in [0, 1] since lambda1 >= lambda2 >= 0.

@chencuber


I think you are right; that form makes sense.

@SuTanTank

SuTanTank commented Oct 17, 2019

Is it possible that lambda2/lambda1 is the correct one? It also seems to fit the 0–1 range.

Update: I was wrong; this value should go up when the pixel is more likely to lie on an edge, i.e. lambda1 >> lambda2.

@brotherofken
Contributor Author

brotherofken commented Oct 17, 2019

@SuTanTank Thanks! That totally makes sense. Look at the end of section 2.2 in Anisotropic Diffusion in Image Processing.
The axis description of Figure 8 says "Presence of an edge" (not coherence), and the book says that mu_1 >> mu_2 characterizes straight edges.
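
To make the discussion concrete, here is a small NumPy sketch (names hypothetical) of how the structure-tensor eigenvalues and the (lambda1 - lambda2)/(lambda1 + lambda2) edge-presence measure discussed above could be computed for a tile. It follows the correction proposed in this thread rather than Fig. 8 as printed:

```python
import numpy as np

def edge_presence(gray_tile, eps=1e-8):
    """Eigen-analysis of the 2x2 gradient structure tensor for one tile.

    Returns (lambda1, lambda2, A) where lambda1 >= lambda2 >= 0 and
    A = (lambda1 - lambda2) / (lambda1 + lambda2) is the "presence of an
    edge" measure discussed above: ~0 in flat/isotropic areas, ~1 on a
    straight edge (lambda1 >> lambda2).
    """
    gy, gx = np.gradient(gray_tile.astype(np.float64))

    # Structure tensor entries, averaged over the tile.
    sxx = np.mean(gx * gx)
    sxy = np.mean(gx * gy)
    syy = np.mean(gy * gy)

    # Closed-form eigenvalues of the symmetric matrix [[sxx, sxy], [sxy, syy]].
    trace = sxx + syy
    diff = np.sqrt((sxx - syy) ** 2 + 4.0 * sxy ** 2)
    lam1 = 0.5 * (trace + diff)
    lam2 = 0.5 * (trace - diff)

    A = (lam1 - lam2) / (lam1 + lam2 + eps)  # in [0, 1] since lam1 >= lam2 >= 0
    return lam1, lam2, A
```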
