Active development? #96
-
Hi Ben,

Thanks so much for reaching out, it's great to hear from you! We are still very much actively developing this package, despite what the recent commit history might suggest :-p I think the reasons for the drop in public-facing commit frequency are

You can view a list of planned development items here. I think highest on our list is the "loose visibility" connector using TorchKbNufft (WIP), followed by exploration of regularizers for the spectral dimension. There are also a couple of ongoing ALMA programs for which we are using MPoL as part of the RML/imaging analysis.

Your idea to utilize/adapt MPoL for JWST AMI sounds very exciting! I'm definitely interested in learning more and, if possible, adapting MPoL to work for these types of data. As I've been learning more about imaging, RML, and developing this package, I have always maintained a hope that this could become a general-purpose RML package for all types of interferometric data, since, as far as I can tell, the basic math is quite universal. I think the modular nature of these neural network packages (in our case PyTorch, but this also applies to JAX) lends itself very well to the construction of flexible, instrument-specific workflows. Thus far I've lacked a knowledgeable collaborator to help extend to SAM, so hopefully this is an opportunity for growth :)

About 1.5 years ago I tried running a case study of PyTorch vs. JAX to see whether it made sense to switch backends. At the time, I concluded that PyTorch was more mature/documented (and easier for me to understand) and that the modular NN design made a lot of sense for our applications. I don't have much recent experience with JAX, so those conclusions might need to be updated in 2022. However, all of the functionality that led me to choose PyTorch in the first place is still there and maintained, so I think the functionality of MPoL itself is on solid ground. I was also comforted by the fact that (at least by "feel") the fraction of MPoL code that is PyTorch-specific is relatively small and contained. This has helped me sleep at night in case the code needs to be updated in a few years if/when there are radical changes in the autodiff landscape.

In your opinion, what would be the benefits of using JAX over PyTorch here? I see that @dfm has created an awesome set of JAX bindings to finufft, so this could be a candidate replacement for TorchKbNufft (though we might need to wait for GPU support, since that's critical for some of our image-cube applications).

If you wouldn't mind sketching out your requirements for JWST/AMI, I could comment on whether the functionality exists in MPoL and/or how difficult it would be to add. I think it would be really great to use the JWST/AMI application to help drive MPoL towards a more general-purpose imaging package, and I am happy to support changing the codebase to make that possible.

Best,
Ian
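P.S. To give a concrete sense of what I mean by the modular design, here is a rough sketch of an RML-style optimization loop in PyTorch. None of this is MPoL's actual API: the module names (ImageCube, FourierLayer) are hypothetical placeholders, and a plain FFT stands in for the NuFFT onto the sampled (u, v) points.

```python
import torch
from torch import nn


class ImageCube(nn.Module):
    """Free-parameter image; softplus keeps pixel values non-negative."""

    def __init__(self, npix: int):
        super().__init__()
        self._base = nn.Parameter(torch.zeros(npix, npix))

    def forward(self) -> torch.Tensor:
        return nn.functional.softplus(self._base)


class FourierLayer(nn.Module):
    """Toy forward model: image -> gridded model 'visibilities' via a plain FFT."""

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(image)))


def data_loss(model_vis, data_vis, weight):
    """Chi-squared-style data term against (mock) visibilities."""
    return torch.sum(weight * torch.abs(model_vis - data_vis) ** 2)


def sparsity_reg(image, strength=1e-3):
    """One example of a pluggable regularizer (L1 sparsity)."""
    return strength * torch.sum(torch.abs(image))


# Toy optimization loop on mock data, just to show how the pieces snap together.
npix = 64
cube, fourier = ImageCube(npix), FourierLayer()
data_vis = torch.randn(npix, npix, dtype=torch.complex64)  # stand-in "observations"
weight = torch.ones(npix, npix)

optimizer = torch.optim.Adam(cube.parameters(), lr=0.05)
for _ in range(200):
    optimizer.zero_grad()
    image = cube()
    loss = data_loss(fourier(image), data_vis, weight) + sparsity_reg(image)
    loss.backward()
    optimizer.step()
```

The point is just that the image model, the forward transform, the data term, and each regularizer are independent modules/functions, so an instrument-specific workflow mostly means swapping in a different forward model and data likelihood.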
-
Hi Ian,

Awesome, love to hear it.

Very briefly re JAX: the truth is the main reason I want to use it is that our differentiable optics simulation code dLux is in JAX, and more generally a lot of packages are moving that way in the near term. There is an excellent package, equinox, which abstracts away a lot of the more restrictive syntax in JAX and gives a nice object-oriented framework, and PyMC now has a JAX NUTS backend. It has overall excellent performance on GPUs (though sadly not Apple silicon yet) and is pretty friendly. My general feeling is that JAX is going to eat other frameworks over the next few years, though I certainly wouldn't bet my house on it, and PyTorch is of course a powerful and mature system itself.

For AMI, we have a reasonably stable ecosystem of packages (e.g. our team's package AMICAL, the STScI package ImPlaneIA, and the closed-source SAMPip) for extracting calibrated squared visibilities and closure phases from the JWST NIRISS mask (and other aperture masking instruments). Unlike in the radio, we do not have any complex visibilities or receivers with uncorrupted phase information. AMICAL and CANDID then do pretty well at fitting parametric models to the data. But we aren't settled on what to do for nonparametric image reconstruction, and personally I want to differentiably simulate the entire thing end-to-end and include the 'calibration' in the model fitting, which is partly what dLux is designed to do.

There are already pretty good nonparametric image reconstruction codes (SQUEEZE, Mira, and the as-yet-unpublished generative NN libraries by our collaborators Dori Blakely and Joel Sanchez-Bermudez), but more can't hurt, especially since at very high resolution and high SNR but poor Fourier coverage, every result is likely to be controversial unless we can nail down the whole pipeline. We have some GTO AMI data coming up and we want to nail it first go, no controversies.

So my thought had been to build a JAX differentiable deconvolution code that progressively includes MaxEnt, TV, smoothness, etc. regularizers, and to connect this to dLux. But if MPoL has this functionality, or you're likely to build something like this in the near future, it could be worth collaborating!

All the best,
Ben
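P.S. For concreteness, here is roughly the shape of objective I have in mind, purely as an illustrative sketch: a plain FFT stands in for the real AMI forward model (which would instead produce squared visibilities and closure phases from the mask geometry), and the regularizer strengths are arbitrary.

```python
import jax
import jax.numpy as jnp


def total_variation(image):
    """TV penalty on neighbouring pixel differences."""
    return jnp.sum(jnp.abs(jnp.diff(image, axis=0))) + jnp.sum(jnp.abs(jnp.diff(image, axis=1)))


def neg_entropy(image, eps=1e-12):
    """Negative image entropy (MaxEnt term), minimized alongside the data term."""
    p = image / (jnp.sum(image) + eps)
    return jnp.sum(p * jnp.log(p + eps))


def loss(params, data_v2, weights, lam_tv=1e-3, lam_ent=1e-3):
    image = jax.nn.softplus(params)  # keep pixels non-negative
    model_v2 = jnp.abs(jnp.fft.fft2(image, norm="ortho")) ** 2  # toy "squared visibilities"
    chi2 = jnp.sum(weights * (model_v2 - data_v2) ** 2)
    return chi2 + lam_tv * total_variation(image) + lam_ent * neg_entropy(image)


# Plain gradient descent on mock data, just to show the differentiable pipeline end-to-end.
npix = 32
key = jax.random.PRNGKey(0)
data_v2 = jnp.abs(jnp.fft.fft2(jax.random.uniform(key, (npix, npix)), norm="ortho")) ** 2
weights = jnp.ones((npix, npix))
params = jnp.zeros((npix, npix))

grad_fn = jax.jit(jax.grad(loss))
for _ in range(200):
    params = params - 1e-4 * grad_fn(params, data_v2, weights)
```

The attraction is that once the forward model (mask geometry, detector, calibration) is also written in JAX, the whole thing stays differentiable and the calibration can be folded into the same fit.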
-
Re NN regularization -
Happy to work with you on using this with AMI data - I'm not a big PyTorch user, but it should be OK. It might be worth getting in touch again after stage 1 about stage 2; stage 3 is trivial, and JWST data in stage 4 should lead to substantial, though entirely non-monetary, profit.
-
Hi Ian,
Love this repo - it seems great! I was just wondering whether it is still under active development. Our team is hoping to build a small package for regularized image reconstruction from the JWST AMI sparse aperture masking instrument, and I was going to suggest we build this in JAX, but I was curious about adapting MPoL too, which is much more mature.
What do you think is the best way forward?
All the best,
Ben