This paper suggests an approach based on a modification of data-smoothing methods. It yields interval estimates for the parameters given the noisy data.
This is probably the last optimization technique I would implement before moving to Bayesian methods such as Turing.jl and Stan.jl.
No, but I just read it and I'm not sure what this gives you over the two-stage method with regularization. The full implicitness is either expensive, or it assumes local linearity on the parameters to use the implicit function theorem which may not be a very good assumption.
You don't need to worry about Turing.jl right now since they are building a prototype: just wait a little bit, see what arrives, and contribute to that.
The Stan.jl thing will be really important for making benchmarking and correctness-testing easier, though. And it shouldn't be too difficult. I would recommend knocking that out first for the biggest impact.
You want me to drop this?
I am in the initial phase of reading and understanding the research paper. We could actually do it for the sake of completeness (plus it gives interval estimates) if it won't be too much effort. What do you say?
I just think the Stan.jl thing is more important, and the optimization using likelihoods + regularization. We can come back to this if there's time. These kinds of smoothing approaches require that you have a lot of data, which isn't necessarily true in most cases, and in those cases two-stage + regularization should do very well anyways. So I'm not sure it's different enough to get priority.
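For reference, the two-stage approach being discussed can be sketched roughly as follows. This is a minimal illustration, not code from the paper or from DiffEqParamEstim: stage one smooths the noisy trajectory (here with a smoothing spline, where the smoothing factor plays the role of the regularization knob), and stage two fits the ODE parameters by matching the model's right-hand side to the smoothed derivative. The logistic model, noise level, and spline settings are all assumptions made up for the example.

```python
# Hypothetical sketch of the two-stage (collocation) method for ODE
# parameter estimation: smooth, differentiate the smooth, then regress.
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Assumed example model: logistic growth du/dt = r*u*(1 - u/K), u(0) = 1.
r_true, K_true = 1.5, 10.0
t = np.linspace(0, 5, 60)
u_true = K_true / (1 + (K_true - 1) * np.exp(-r_true * t))  # closed form
u_noisy = u_true + rng.normal(0, 0.2, t.size)

# Stage 1: smooth the data. The smoothing factor s is the regularization
# knob; a common heuristic is s ~ n * sigma^2 for noise std sigma.
spline = UnivariateSpline(t, u_noisy, s=t.size * 0.2**2)
u_hat = spline(t)
du_hat = spline.derivative()(t)

# Stage 2: fit (r, K) so the model RHS matches the smoothed derivative.
def residual(p):
    r, K = p
    return r * u_hat * (1 - u_hat / K) - du_hat

fit = least_squares(residual, x0=[1.0, 5.0])
r_est, K_est = fit.x
```

Note how the method never integrates the ODE, which is why it is cheap, and why, as noted above, its accuracy hinges on having enough data for the smoother to recover the derivative well.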
@ChrisRackauckas @finmod have you ever been through this paper?