Commit

Merge branch 'main' into signal_efficiency

hammannr authored Aug 2, 2024
2 parents d92c002 + dbfbb8f commit 2b9f1a1

Showing 9 changed files with 40 additions and 11 deletions.
2 changes: 1 addition & 1 deletion .bumpversion.cfg
@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 0.2.4
+current_version = 0.2.6
files = setup.py alea/__init__.py
commit = True
tag = True
2 changes: 1 addition & 1 deletion .github/workflows/pypi_install.yml
@@ -5,7 +5,7 @@ name: PyPI
on:
workflow_dispatch:
release:
-types: [ created ]
+types: [published]

jobs:
build:
33 changes: 33 additions & 0 deletions HISTORY.md
@@ -1,3 +1,36 @@
0.2.6 / 2024-07-31
------------------
* Defunctionalize `apply_efficiency`, apply efficiency when `efficiency_name` is specified by @dachengx in https://github.com/XENONnT/alea/pull/183

**Full Changelog**: https://github.com/XENONnT/alea/compare/v0.2.5...v0.3.0


0.2.5 / 2024-07-30
------------------
* Consistent sorting for BlueiceExtendedModel by @hammannr in https://github.com/XENONnT/alea/pull/149
* Fixed data storing by @hammannr in https://github.com/XENONnT/alea/pull/152
* Add lxml_html_clean to fix readthedocs building error by @zihaoxu98 in https://github.com/XENONnT/alea/pull/157
* Fitting index variables by @zihaoxu98 in https://github.com/XENONnT/alea/pull/156
* Print Argument combinations to be submitted by @hammannr in https://github.com/XENONnT/alea/pull/151
* Minor changes to fitting index variables (PR #156) by @hammannr in https://github.com/XENONnT/alea/pull/159
* Set `i_batch` for `SubmitterLocal` when submitting by @dachengx in https://github.com/XENONnT/alea/pull/164
* Debug for interpolator deduction of `NeymanConstructor` by @dachengx in https://github.com/XENONnT/alea/pull/165
* The first i batch should be 0 by @dachengx in https://github.com/XENONnT/alea/pull/166
* Try prefix every file path in likelihood configuration with template folder by @dachengx in https://github.com/XENONnT/alea/pull/169
* Forbid prefixing every key when adapt_likelihood_config_for_blueice by @FaroutYLq in https://github.com/XENONnT/alea/pull/170
* Refactored Pegasus-based OSG submitter by @FaroutYLq in https://github.com/XENONnT/alea/pull/163
* Try fixing https://github.com/XENONnT/alea/issues/173 by @dachengx in https://github.com/XENONnT/alea/pull/176
* Allow assigning kwargs in debug mode by @dachengx in https://github.com/XENONnT/alea/pull/174
* Allow `confidence_level` in filename by @dachengx in https://github.com/XENONnT/alea/pull/179
* Add 68% coverage as one of the defaults of `confidence_levels` by @dachengx in https://github.com/XENONnT/alea/pull/180
* Document to increase CPUs by @FaroutYLq in https://github.com/XENONnT/alea/pull/178

New Contributors
* @FaroutYLq made their first contribution in https://github.com/XENONnT/alea/pull/170

**Full Changelog**: https://github.com/XENONnT/alea/compare/v0.2.4...v0.2.5


0.2.4 / 2024-03-18
------------------
* Point away from alea for physics models by @kdund in https://github.com/XENONnT/alea/pull/143
2 changes: 1 addition & 1 deletion alea/__init__.py
@@ -1,4 +1,4 @@
__version__ = "0.2.4"
__version__ = "0.2.6"

from .parameters import *

2 changes: 0 additions & 2 deletions alea/examples/configs/unbinned_wimp_statistical_model.yaml
@@ -101,7 +101,6 @@ likelihood_config:
named_parameters:
- wimp_mass
template_filename: wimp{wimp_mass:d}gev_template.ii.h5
-apply_efficiency: True
efficiency_name: signal_efficiency

# SR1
@@ -134,5 +133,4 @@ likelihood_config:
named_parameters:
- wimp_mass
template_filename: wimp{wimp_mass:d}gev_template.ii.h5
-apply_efficiency: True
efficiency_name: signal_efficiency
2 changes: 0 additions & 2 deletions (file name not shown)
@@ -93,7 +93,6 @@ likelihood_config:
- wimp_mass
- signal_efficiency
template_filename: wimp50gev_template.ii.h5
-apply_efficiency: True
efficiency_name: signal_efficiency

# SR3, 1D inference on cS2 space
@@ -127,5 +126,4 @@ likelihood_config:
- signal_efficiency
template_filename: wimp50gev_template.ii.h5
spectrum_name: test_cs1_spectrum.json
-apply_efficiency: True
efficiency_name: signal_efficiency
2 changes: 1 addition & 1 deletion alea/models/blueice_extended_model.py
@@ -360,7 +360,7 @@ def _build_ll_from_config(
)

# set efficiency parameters
if source.get("apply_efficiency", False):
if source.get("efficiency_name", None):
self._set_efficiency(source, ll)

# set shape parameters
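
A minimal sketch of what this change means for source configurations, using hypothetical dictionaries (the actual check lives in `_build_ll_from_config`, as shown in the diff above): efficiency handling is now triggered by the presence of `efficiency_name` alone, so the separate `apply_efficiency` flag is no longer needed.

```python
# Sketch of the new gating logic with hypothetical source dictionaries.
def should_apply_efficiency(source: dict) -> bool:
    """Mirror of the new condition: apply efficiency iff efficiency_name is set."""
    return bool(source.get("efficiency_name", None))


old_style = {"apply_efficiency": True, "efficiency_name": "signal_efficiency"}
new_style = {"efficiency_name": "signal_efficiency"}  # apply_efficiency dropped

assert should_apply_efficiency(old_style)
assert should_apply_efficiency(new_style)
assert not should_apply_efficiency({"template_filename": "wimp50gev_template.ii.h5"})
```
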
4 changes: 2 additions & 2 deletions alea/submitters/README.md
@@ -32,7 +32,7 @@ Following this as an example
htcondor_configurations:
template_path: "/ospool/uc-shared/project/xenon/binference_common/binference_common/nt_cevns_templates/v7"
cluster_size: 1
-request_cpus: 1
+request_cpus: 4
request_memory: 2000
request_disk: 2000000
combine_disk: 20000000
@@ -46,7 +46,7 @@ htcondor_configurations:
```
- `template_path`: where you put your input templates. Note that **all files have to have unique names**. All templates inside will be tarred and the tarball will be uploaded to the grid when computing.
- `cluster_size`: clusters multiple `alea-run_toymc` jobs into a single job. For example, if you expect to run 100 individual `alea-run_toymc` jobs and set `cluster_size: 10`, only 10 clustered jobs will be submitted in the end, each running 10 jobs in sequence (see the sketch after this list). Unless you have a very large number of jobs (>200), I don't recommend changing it from 1.
-- `request_cpus`: number of CPUs for each job. The default 1 should be good.
+- `request_cpus`: number of CPUs for each job. It should be larger than alea's maximum multi-threading number; otherwise OSG will complain.
- `request_memory`: requested memory for each job in unit of MB. Please don't put a number larger than what you need, because it will significantly reduce our available slots.
- `request_disk`: requested disk for each job in unit of KB. Please don't put a number larger than what you need, because it will significantly reduce our available slots.
- `combine_disk`: requested disk for combine job in unit of KB. In most cases 20GB is enough.
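
A rough illustration of the bookkeeping behind `cluster_size` and the disk fields, using the hypothetical job counts from the `cluster_size` item and the example values from the configuration block above:

```python
import math

# Hypothetical numbers illustrating how cluster_size groups alea-run_toymc jobs.
n_toymc_jobs = 100   # individual alea-run_toymc invocations you expect to run
cluster_size = 10    # toyMC jobs bundled into one HTCondor job, run in sequence

n_condor_jobs = math.ceil(n_toymc_jobs / cluster_size)
print(n_condor_jobs)  # 10 submitted jobs, each running 10 toyMC jobs one after another

# The disk fields are given in KB, so the example values correspond roughly to:
request_disk_kb = 2_000_000    # ~2 GB per job
combine_disk_kb = 20_000_000   # ~20 GB for the combine job
print(request_disk_kb / 1e6, "GB per job,", combine_disk_kb / 1e6, "GB for combine")
```
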
2 changes: 1 addition & 1 deletion setup.py
@@ -22,7 +22,7 @@ def open_requirements(path):

setuptools.setup(
name="alea-inference",
version="0.2.4",
version="0.2.6",
description="A tool to perform toyMC-based inference constructions",
author="Alea contributors, the XENON collaboration",
long_description=readme + "\n\n" + history,
