bat_sample error #401
My notebook with some plots is also attached here.
Hi @atasattari, BAT.jl currently has trouble running with only one chain; could you run with at least two? Fixing this is on the to-do list, but we haven't gotten around to it yet, since people typically run with 4 or 8 chains. I also recommend using BAT.jl#main instead of the last 2.x release, which is quite old. We're preparing BAT.jl v3.0, but some changes need to land first.
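For reference, a minimal sketch of requesting several chains with the BAT.jl 2.x `MCMCSampling` API (keyword names as in that release; `posterior` is assumed to be your already-defined `PosteriorDensity`):

```julia
using BAT

# Ask for 4 chains instead of 1; the chain-pool init step that raised
# the AssertionError expects more than one candidate chain.
samples = bat_sample(
    posterior,
    MCMCSampling(mcalg = MetropolisHastings(), nchains = 4, nsteps = 10^5)
).result
```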
One important difference is that emcee can run many walkers in parallel (we should probably add an emcee-like sampler to BAT), and the walkers share information about the posterior, so they "help each other" converge. Single-walker Metropolis or HMC chains can converge more slowly, since each walker has to find its own way to the typical set. We have some code for improved burn-in based on the "Pathfinder" algorithm and on Robust Adaptive Metropolis. In case it's a performance issue with your model, could you run
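To illustrate how emcee's walkers share information, here is a minimal sketch of the affine-invariant "stretch move" (Goodman & Weare 2010) that emcee is built on; each walker proposes a step along the line toward another walker, so the ensemble geometry guides the proposals. All names and the toy target are illustrative, not BAT.jl or emcee API:

```julia
# One stretch-move sweep over an ensemble of walkers.
# walkers is a d × n matrix (n walkers in d dimensions),
# logp is the log target density.
function stretch_move_step!(walkers::Matrix{Float64}, logp; a = 2.0)
    d, n = size(walkers)
    for k in 1:n
        j = mod1(k + rand(1:(n - 1)), n)            # a different walker
        z = ((a - 1) * rand() + 1)^2 / a            # z ~ g(z) ∝ 1/sqrt(z)
        proposal = walkers[:, j] .+ z .* (walkers[:, k] .- walkers[:, j])
        # The z^(d-1) factor keeps the move detailed-balanced
        if log(rand()) < (d - 1) * log(z) + logp(proposal) - logp(walkers[:, k])
            walkers[:, k] = proposal
        end
    end
    return walkers
end

# Toy usage: 32 walkers on a 2-d standard normal
logp(x) = -0.5 * sum(abs2, x)
walkers = randn(2, 32)
for _ in 1:1000
    stretch_move_step!(walkers, logp)
end
```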
Thanks so much for the hints. I performed a basic comparison between the log-likelihood call times in Julia and Python. The Python one is ~20-30% faster. I'm not a Julia expert, but there seem to be some tweaks that could boost the performance. At least running in parallel seems to be much simpler in Julia; emcee has issues pickling some of the Numba functions, which prevents it from running in parallel.
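A quick way to make such call-time comparisons reliable on the Julia side is BenchmarkTools, which repeats the call and reports minimum time, excluding compilation. `loglik` and `v` below are placeholders for your log-likelihood callable and a typical parameter value:

```julia
using BenchmarkTools

# Interpolate with $ so the benchmark measures the call itself,
# not global-variable lookup overhead.
@btime loglik($v)
```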
If you can send me your complete notebook (with necessary data files) I can take a look at potential performance issues.
@atasattari is this still an active issue? |
@oschulz The issue is resolved. It was mainly slowness and a lack of log reports for debugging. The slowness turned out to come from repetitive integral calculations needed to normalize the PDF of custom models in the likelihood. This situation should be common in unbinned likelihood studies. Would it be reasonable to add a similar example to the tutorial? I can provide my code for reference if needed.
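A hedged sketch of the kind of fix described above: compute the normalization integral of a custom model density once per likelihood call rather than once per event. `shape`, `lo`, and `hi` are illustrative names, not from the original code:

```julia
using QuadGK

# Unbinned log-likelihood for data under an unnormalized model density
# shape(x, params). The quadgk normalization runs once per parameter
# set, not once per data point.
function loglikelihood(params, data; lo = 0.0, hi = 10.0)
    f(x) = shape(x, params)
    norm, _ = quadgk(f, lo, hi)                  # one integral per call...
    return sum(log(f(x) / norm) for x in data)   # ...shared by all events
end
```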
Thank you, yes. We were thinking about adding more examples/tutorials anyway.
Hi,
I followed the tutorial to define a custom log-likelihood and passed it to bat_sample(). My likelihood seems to work fine; however, bat_sample() doesn't work for me. After some time I get the error below:

AssertionError: length(chains) == nchains
Stacktrace:
[1] mcmc_init!(rng::Random123.Philox4x{UInt64, 10}, algorithm::MetropolisHastings{BAT.MvTDistProposal, RepetitionWeighting{Int64}, AdaptiveMHTuning}, density::PosteriorDensity{BAT.TransformedDensity{BAT.GenericDensity{var"#36#37"{Vector{Float64}, typeof(pdf)}}, BAT.DistributionTransform{ValueShapes.StructVariate{NamedTuple{(:offset, :resolution, :k)}}, BAT.InfiniteSpace, BAT.MixedSpace, BAT.StandardMvNormal{Float64}, NamedTupleDist{(:offset, :resolution, :k), Tuple{Product{Continuous, Uniform{Float64}, Vector{Uniform{Float64}}}, Uniform{Float64}, Uniform{Float64}}, Tuple{ValueAccessor{ArrayShape{Real, 1}}, ValueAccessor{ScalarShape{Real}}, ValueAccessor{ScalarShape{Real}}}}}, BAT.TDNoCorr}, BAT.DistributionDensity{BAT.StandardMvNormal{Float64}, BAT.HyperRectBounds{Float32}}, BAT.HyperRectBounds{Float32}, ArrayShape{Real, 1}}, nchains::Int64, init_alg::MCMCChainPoolInit, tuning_alg::AdaptiveMHTuning, nonzero_weights::Bool, callback::Function)
@ BAT ~/.julia/packages/BAT/XvOy6/src/samplers/mcmc/chain_pool_init.jl:160
Any idea what could be the issue?
Reproducer: