
feat: Toy Calculator executor support #1158

Draft · wants to merge 14 commits into main

Conversation

@kratsg (Contributor) commented Oct 29, 2020

Description

Lorem ipsum dolor sit amet. See #807.

Checklist Before Requesting Reviewer

  • Tests are passing
  • "WIP" removed from the title of the pull request
  • Selected an Assignee for the PR to be responsible for the log summary

Before Merging

For the PR Assignees:

  • Summarize commit messages into a comprehensive review of the PR

@codecov (bot) commented Oct 29, 2020

Codecov Report

Attention: Patch coverage is 62.50000% with 12 lines in your changes missing coverage. Please review.

Project coverage is 97.90%. Comparing base (997e5e5) to head (b88b5cf).

Current head b88b5cf differs from pull request most recent head 971a4a9

Please upload reports for the commit 971a4a9 to get more accurate results.

Files                            Patch %    Lines
src/pyhf/infer/calculators.py    58.82%     6 Missing and 1 partial ⚠️
src/pyhf/futures.py              66.66%     5 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1158      +/-   ##
==========================================
- Coverage   98.21%   97.90%   -0.32%     
==========================================
  Files          69       69              
  Lines        4543     4339     -204     
  Branches      804      729      -75     
==========================================
- Hits         4462     4248     -214     
- Misses         48       57       +9     
- Partials       33       34       +1     
Flag              Coverage Δ
contrib           26.41% <15.62%> (-71.39%) ⬇️
doctest           60.72% <62.50%> (-37.36%) ⬇️
unittests-3.10    95.78% <62.50%> (-0.46%) ⬇️
unittests-3.11    ?
unittests-3.12    ?
unittests-3.7     95.76% <62.50%> (?)
unittests-3.8     ?
unittests-3.9     ?

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@kratsg (Contributor, Author) commented Oct 29, 2020

Script I used to quickly test the functionality:

import pyhf
import pyhf.futures
import concurrent
import time

model = pyhf.simplemodels.hepdata_like([5], [10], [3.5])
data = [12.5] + model.config.auxdata

ntoys = 500

if __name__ == '__main__':
    for executor in [pyhf.futures.TrivialExecutor(), concurrent.futures.ProcessPoolExecutor(), concurrent.futures.ThreadPoolExecutor()]:
        start = time.time()
        result = pyhf.infer.hypotest(1.0, data, model, qtilde=True, calctype='toybased', ntoys=ntoys, executor=executor)
        print(f'Executor = {executor}')
        print(f'CLs_obs = {result}')
        print(f'Took {time.time() - start} seconds for {ntoys} toys.')
        executor.shutdown()

which results in (numpy)

$ python toys.py 
Executor = <pyhf.futures.TrivialExecutor object at 0x12ccc4dc0>                                                                                                                                                                               
CLs_obs = 0.43142857142857144
Took 8.418382167816162 seconds for 500 toys.
Executor = <concurrent.futures.process.ProcessPoolExecutor object at 0x12ccc4eb0>                                                                                                                                                             
CLs_obs = 0.4542936288088643
Took 2.402064085006714 seconds for 500 toys.
Executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x12cd67370>                                                                                                                                                               
CLs_obs = 0.37700534759358284
Took 11.590389966964722 seconds for 500 toys.

and for 5k toys (numpy)

$ python toys.py 
Executor = <pyhf.futures.TrivialExecutor object at 0x138c9ce80>                                                                                                                                                                               
CLs_obs = 0.4236186348862406
Took 92.0139479637146 seconds for 5000 toys.

Executor = <concurrent.futures.process.ProcessPoolExecutor object at 0x138c9cf70>                                                                                                                                                             
CLs_obs = 0.4380434782608696
Took 24.271136045455933 seconds for 5000 toys.

Executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x138d02430>                                                                                                                                                               
CLs_obs = 0.4217391304347826
Took 130.05941200256348 seconds for 5000 toys.

and 5k toys (jax)

$ python toys.py 
Executor = <pyhf.futures.TrivialExecutor object at 0x14311fd60>                                                                                                                                                                               
CLs_obs = 0.44140945096968043
Took 81.22803473472595 seconds for 5000 toys.

Executor = <concurrent.futures.process.ProcessPoolExecutor object at 0x14311fe80>                                                                                                                                                             
CLs_obs = 0.44008774335069917
Took 547.9745662212372 seconds for 5000 toys.

Executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x14312a640>                                                                                                                                                               
CLs_obs = 0.4326608505997819
Took 93.99376606941223 seconds for 5000 toys.

and 5k toys (torch)

Executor = <pyhf.futures.TrivialExecutor object at 0x13f83f340>                                                                                                                                                                               
CLs_obs = 0.4310019016265869
Took 182.5893669128418 seconds for 5000 toys.

Executor = <concurrent.futures.process.ProcessPoolExecutor object at 0x13f858be0>                                                                                                                                                             
CLs_obs = 0.4109438955783844
Took 37.67385792732239 seconds for 5000 toys.

Executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x13f874c70>                                                                                                                                                               
CLs_obs = 0.4270491600036621
Took 158.8731780052185 seconds for 5000 toys.

and 5k toys (tensorflow)

Executor = <pyhf.futures.TrivialExecutor object at 0x14f4e64c0>                                                                                                                                                                               
CLs_obs = 0.42907899618148804
Took 901.4008986949921 seconds for 5000 toys.

Executor = <concurrent.futures.process.ProcessPoolExecutor object at 0x14f4e66d0>                                                                                                                                                             
CLs_obs = 0.4172525703907013
Took 177.639319896698 seconds for 5000 toys.

Executor = <concurrent.futures.thread.ThreadPoolExecutor object at 0x14f4e6bb0>                                                                                                                                                               
CLs_obs = 0.4322930872440338
Took 1238.2977120876312 seconds for 5000 toys.
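
For context on the timings above: the TrivialExecutor presumably just runs each submitted call synchronously in the calling process, which is why its numbers serve as the serial baseline the pool executors are compared against. Below is a minimal sketch of such a synchronous executor; it is an illustration built only on the standard concurrent.futures.Executor interface, not the actual pyhf.futures implementation, and the class name is made up.

import concurrent.futures

class SynchronousExecutor(concurrent.futures.Executor):
    """Illustration only: run each submitted call immediately, in-process."""

    def submit(self, fn, *args, **kwargs):
        future = concurrent.futures.Future()
        try:
            future.set_result(fn(*args, **kwargs))
        except Exception as exc:  # deliberately narrower than BaseException
            future.set_exception(exc)
        return future

    def shutdown(self, wait=True):
        pass  # no worker pool to tear down

Because Executor.map() is inherited from the base class and implemented in terms of submit(), an object like this drops into the same loop used in the script above.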

@lgtm-com (bot) commented Oct 29, 2020

This pull request introduces 1 alert when merging 6a8f72e into 81c9adb - view on LGTM.com

new alerts:

  • 1 for Except block handles 'BaseException'
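
For reference, LGTM flags except blocks that catch BaseException because they also swallow KeyboardInterrupt and SystemExit. The offending code from 6a8f72e is not shown in this thread; the usual fix is simply to catch Exception (or something narrower). A rough, self-contained sketch with placeholder names (run_toy, log, and the message are hypothetical, not pyhf code):

import logging

log = logging.getLogger(__name__)

def run_toy(fn, *args):
    """Hypothetical wrapper illustrating the narrower except clause."""
    try:
        return fn(*args)
    except Exception as err:  # not BaseException: KeyboardInterrupt/SystemExit still propagate
        log.warning("toy evaluation failed: %s", err)
        raise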

@kratsg (Contributor, Author) commented Mar 30, 2022

Using this to test the functionality:

import pyhf
import pyhf.futures
import concurrent
import time

ntoys = 500

if __name__ == '__main__':

    for optimizer in ['scipy', 'minuit']:
        for backend in ['numpy', 'jax']:
            pyhf.set_backend(backend, optimizer)

            model = pyhf.simplemodels.uncorrelated_background(
                [5],
                [10],
                [3.5],
            )
            data = pyhf.tensorlib.astensor([12.5] + model.config.auxdata)

            print(f'Backend     = {pyhf.tensorlib.name}')
            print(f'Optimizer   = {pyhf.optimizer.name}')
            print(f'ntoys       = {ntoys}')

            for executor in [
                pyhf.futures.TrivialExecutor(),
                concurrent.futures.ProcessPoolExecutor(),
                concurrent.futures.ThreadPoolExecutor(),
            ]:
                start = time.time()
                result = pyhf.infer.hypotest(
                    1.0,
                    data,
                    model,
                    test_stat='qtilde',
                    calctype='toybased',
                    ntoys=ntoys,
                    executor=executor,
                )
                print(f'  * with Executor = {executor}')
                print(f'    - CLs_obs  = {result:0.6f}')
                print(f'    - Time     = {time.time() - start:0.6f} seconds')
                executor.shutdown()
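
A small variation on the loop above: the standard-library pool executors are context managers, so the explicit shutdown() call can be replaced with a with block (and, assuming TrivialExecutor subclasses concurrent.futures.Executor, it inherits the same __enter__/__exit__ behaviour). A sketch, reusing data, model, and ntoys from the script above:

import concurrent.futures

with concurrent.futures.ProcessPoolExecutor() as executor:
    # shutdown(wait=True) happens automatically when the block exits
    result = pyhf.infer.hypotest(
        1.0,
        data,
        model,
        test_stat='qtilde',
        calctype='toybased',
        ntoys=ntoys,
        executor=executor,
    )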

@matthewfeickert changed the base branch from master to main on September 21, 2022
@kratsg added the experiment/belle2 (Relevant to Belle-II's interests) label on October 18, 2024