
Period sampling #25

Merged
merged 15 commits into from
Dec 4, 2023

Conversation

AngusMcLure (Owner)

Implements fi_pool_cluster_random and optimise_random_prevalence for evaluating and designing surveys where the number of units caught at each site (or overall) are random. Introduces a few pieces of machinery (catch distributions and pooling strategies) to make this work. Also quickly ran into problems with convergence of integrals with fi_pool_cluster (which is called repeatedly by the new functions, often in 'edge' cases). I have addressed the worst of these so that this will work, but there are some more improvements I have identified (which I've listed in a separate issue: #24)

Includes implementation of an expected-value helper function, reworking of the fi_X_random functions to use this expectation, reworking of fisher_information_cluster to improve convergence of integrals, and an integer search function to do the integer optimisation in optimise_random_prevalence
Fix fisher_information.R conflicts, mainly prefixing functions with stats::
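The integer search mentioned above can be sketched as a simple hill climb over the integers. This is an illustrative stand-alone function, not the package's implementation, and it assumes the objective is unimodal on the search range:

```r
# Illustrative hill-climbing search over the integers, assuming the
# objective f is unimodal on [lower, upper]. A sketch of the kind of
# routine described above, not the package's optimise_random_prevalence.
integer_search <- function(f, start, lower = 1, upper = Inf) {
  x  <- max(lower, min(start, upper))
  fx <- f(x)
  repeat {
    neighbours <- c(x - 1, x + 1)
    neighbours <- neighbours[neighbours >= lower & neighbours <= upper]
    fn <- vapply(neighbours, f, numeric(1))
    # Stop when neither neighbour improves on the current point
    if (all(fn >= fx)) return(list(minimum = x, objective = fx))
    x  <- neighbours[which.min(fn)]
    fx <- min(fn)
  }
}

# For a convex objective the search walks straight to the best integer:
integer_search(function(n) (n - 7.3)^2, start = 1)$minimum  # 7
```

For a unimodal objective this visits at most one point per integer between the start and the optimum, which is usually cheap when each evaluation is an expensive Fisher information calculation.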
@fredjaya (Collaborator) left a comment


There seems to be an issue with this line that causes the example and tests to throw an error and warnings:

stats::integrate(integrand, 0, 1, y = y, rel.tol = tol, abs.tol = tol)$value *

Running the example:

fi_pool_cluster_random(
    catch_dist = nb_catch(mean = 10, variance = 50),
    pool_strat = pool_target_number(target_number = 2),
    prevalence = 0.01, correlation = 0.05,
    sensitivity = 0.95, specificity = 0.99,
    form = 'logitnorm')

Results in

 $mu
  [1] -5.656014
  
  $sigma
  [1] 1.499623
  
  $p
   [1] 0.500000000 0.013046736 0.986953264 0.067468317 0.932531683 0.160295216
   [7] 0.839704784 0.283302303 0.716697697 0.425562831 0.574437169 0.002171418
  [13] 0.997828582 0.034921254 0.965078746 0.109591137 0.890408863 0.218621433
  [19] 0.781378567 0.352803569 0.647196431
  
  $s
  [1] 1
  
  $y
  [1] 0
  
  $N
  [1] 2
  
  $out
   [1] 1.915279e-04 1.329318e+01          NaN 4.647553e-01 4.092094e-10
   [6] 3.820055e-02 1.851309e-07 4.399644e-03 5.110521e-06 5.538236e-04
  [11] 6.299291e-05 1.139473e+02          NaN 2.131901e+00          NaN
  [16] 1.247683e-01 1.764559e-08 1.266624e-02 1.130488e-06 1.561988e-03
  [21] 1.909965e-05
  
  $problemp
  [1] 0.9869533 0.9978286 0.9650787
  
  Error in stats::integrate(integrand, 0, 1, y = y, rel.tol = tol, abs.tol = tol) : 
    non-finite function value
In addition: Warning messages:
1: In log(one_minus_phi(pj)) : NaNs produced
2: In log(one_minus_phi(pj)) : NaNs produced
3: In log(one_minus_phi(pj)) :
  Calls: fi_pool_cluster_random ... fn -> fi_pool_cluster -> apply -> FUN -> <Anonymous>
  Execution halted
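For context on those warnings: when pj is extremely close to 1 (see the $problemp values above), floating-point rounding can push the computed complement slightly below zero, log() then returns NaN, and stats::integrate() aborts as soon as the integrand returns a non-finite value. A minimal sketch of the failure mode and one common guard, written with hypothetical names (safe_log is an illustrative helper, not part of the package):

```r
# Rounding can push a computed probability just past its bounds:
phi <- 1 + 1e-15          # e.g. a value that rounded just above 1
log(1 - phi)              # NaN, with a "NaNs produced" warning

# One common guard: bound the argument away from 0 and 1 before logging.
eps <- .Machine$double.eps
safe_log <- function(p) log(pmin(pmax(p, eps), 1 - eps))
safe_log(1 - phi)         # finite: log(eps), about -36

# With the guard the integrand stays finite on [0, 1], so integrate()
# no longer stops with "non-finite function value":
stats::integrate(function(x) safe_log(1 - x), 0, 1)$value
```

The clamp changes the integrand only within machine precision of the endpoints, so for well-behaved integrands it leaves the integral essentially unchanged while preventing the hard stop.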

Can you reproduce the error and warnings when running check() or test() in a new R session @AngusMcLure ?

@AngusMcLure (Owner, Author)

Hi Fred,
Yes, I was able to reproduce the problem. The issue was fixed in c03a54e. Running the tests you created identified other problems, most of which have been fixed in e29d110, db687bf, and 013326c. There are still a few tests failing to converge, but these use quite extreme input values, so I'm surprised they ever converged before; the values they test against seem wrong anyway, so I think the tests need to be revisited.

@AngusMcLure (Owner, Author)

Merging even though the tests are failing, as I think it's the tests that need to be fixed in this case.

@AngusMcLure AngusMcLure merged commit 2e4878e into main Dec 4, 2023
0 of 6 checks passed
@fredjaya (Collaborator)

fredjaya commented Dec 4, 2023

Great! Is it safe to assume that all the tests that fail are the incorrect ones that need amending?

@AngusMcLure (Owner, Author)

Yes, but I think we should change (or add) a few others. It might be easier to go through them together in a call? I can arrange one tomorrow, when I'll have some time for this.
