Commit: Update pre-commit (#570)

* Fix Stucchio URL

The backslashes are appearing in the actual URL

* Update bibtex-tidy

* Fix Padonou URL

* Increase Node version 15→18

* Run pre-commit autoupdate

* Run pre-commit on all files

maresb authored Sep 8, 2023
1 parent 14a7fde commit 4b57405
Showing 79 changed files with 131 additions and 576 deletions.
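The last two commit-message items correspond to the standard pre-commit commands below (a sketch of the likely workflow; the exact invocation is not recorded in the commit):

```bash
# Bump every hook's `rev` in .pre-commit-config.yaml to its latest released tag
pre-commit autoupdate

# Re-run all configured hooks against the entire repository, not just staged files
pre-commit run --all-files
```

Running the hooks repo-wide accounts for most of the deletions listed below: black 23.x removes blank lines directly after a block opener such as `def` or `with`, and the notebook hooks strip empty `tags` metadata from cells.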
8 changes: 4 additions & 4 deletions .pre-commit-config.yaml
@@ -1,18 +1,18 @@
 repos:
   - repo: https://github.com/psf/black
-    rev: 22.3.0
+    rev: 23.7.0
     hooks:
       - id: black-jupyter
   - repo: https://github.com/nbQA-dev/nbQA
-    rev: 1.1.0
+    rev: 1.7.0
     hooks:
       - id: nbqa-isort
         additional_dependencies: [isort==5.6.4]
       - id: nbqa-pyupgrade
         additional_dependencies: [pyupgrade==2.7.4]
         args: [--py37-plus]
   - repo: https://github.com/MarcoGorelli/madforhooks
-    rev: 0.3.0
+    rev: 0.4.1
     hooks:
       - id: check-execution-order
         args: [--strict]
@@ -96,7 +96,7 @@ repos:
         language: pygrep
         types_or: [markdown, rst, jupyter]
   - repo: https://github.com/mwouts/jupytext
-    rev: v1.13.7
+    rev: v1.15.1
     hooks:
       - id: jupytext
         files: ^examples/.+\.ipynb$
6 changes: 1 addition & 5 deletions examples/case_studies/GEV.myst.md
@@ -10,8 +10,6 @@ kernelspec:
   name: pymc4-dev
 ---
 
-+++ {"tags": []}
-
 # Generalized Extreme Value Distribution
 
 :::{post} Sept 27, 2022
@@ -20,7 +18,7 @@ kernelspec:
 :author: Colin Caprani
 :::
 
-+++ {"tags": []}
++++
 
 ## Introduction
 
@@ -94,8 +92,6 @@ And now set up the model using priors estimated from a quick review of the histo
 - $\xi$: we are agnostic to the tail behaviour so centre this at zero, but limit to physically reasonable bounds of $\pm 0.6$, and keep it somewhat tight near zero.
 
 ```{code-cell} ipython3
-:tags: []
-
 # Optionally centre the data, depending on fitting and divergences
 # cdata = (data - data.mean())/data.std()
2 changes: 0 additions & 2 deletions examples/case_studies/Missing_Data_Imputation.ipynb
@@ -1934,7 +1934,6 @@
 "\n",
 "\n",
 "def make_model(priors, normal_pred_assumption=True):\n",
-"\n",
 "    coords = {\n",
 "        \"alpha_dim\": [\"lmx_imputed\", \"climate_imputed\", \"empower_imputed\"],\n",
 "        \"beta_dim\": [\n",
@@ -8628,7 +8627,6 @@
 "\n",
 "\n",
 "with pm.Model(coords=coords) as hierarchical_model:\n",
-"\n",
 "    # Priors\n",
 "    company_beta_lmx = pm.Normal(\"company_beta_lmx\", 0, 1)\n",
 "    company_beta_male = pm.Normal(\"company_beta_male\", 0, 1)\n",
2 changes: 0 additions & 2 deletions examples/case_studies/Missing_Data_Imputation.myst.md
@@ -426,7 +426,6 @@ priors = {
 def make_model(priors, normal_pred_assumption=True):
-
     coords = {
         "alpha_dim": ["lmx_imputed", "climate_imputed", "empower_imputed"],
         "beta_dim": [
@@ -707,7 +706,6 @@ coords = {"team": teams, "employee": np.arange(len(df_employee))}
 with pm.Model(coords=coords) as hierarchical_model:
-
     # Priors
     company_beta_lmx = pm.Normal("company_beta_lmx", 0, 1)
     company_beta_male = pm.Normal("company_beta_male", 0, 1)
6 changes: 0 additions & 6 deletions examples/case_studies/bart_heteroscedasticity.myst.md
@@ -24,8 +24,6 @@ kernelspec:
 In this notebook we show how to use BART to model heteroscedasticity as described in Section 4.1 of [`pymc-bart`](https://github.com/pymc-devs/pymc-bart)'s paper {cite:p}`quiroga2022bart`. We use the `marketing` data set provided by the R package `datarium` {cite:p}`kassambara2019datarium`. The idea is to model a marketing channel contribution to sales as a function of budget.
 
 ```{code-cell} ipython3
-:tags: []
-
 import os
 
 import arviz as az
@@ -37,8 +35,6 @@ import pymc_bart as pmb
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 %config InlineBackend.figure_format = "retina"
 az.style.use("arviz-darkgrid")
 plt.rcParams["figure.figsize"] = [10, 6]
@@ -157,8 +153,6 @@ The fit looks good! In fact, we see that the mean and variance increase as a fun
 ## Watermark
 
 ```{code-cell} ipython3
-:tags: []
-
 %load_ext watermark
 %watermark -n -u -v -iv -w -p pytensor
 ```
40 changes: 0 additions & 40 deletions examples/case_studies/binning.myst.md
@@ -106,8 +106,6 @@ Hypothetically we could have used base python, or numpy, to describe the generat
 The approach was illustrated with a Gaussian distribution, and below we show a number of worked examples using Gaussian distributions. However, the approach is general, and at the end of the notebook we provide a demonstration that the approach does indeed extend to non-Gaussian distributions.
 
 ```{code-cell} ipython3
-:tags: []
-
 import warnings
 
 import arviz as az
@@ -219,8 +217,6 @@ We will start by investigating what happens when we use only one set of bins to
 ### Model specification
 
 ```{code-cell} ipython3
-:tags: []
-
 with pm.Model() as model1:
     sigma = pm.HalfNormal("sigma")
     mu = pm.Normal("mu")
@@ -235,8 +231,6 @@ pm.model_to_graphviz(model1)
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 with model1:
     trace1 = pm.sample()
 ```
@@ -248,8 +242,6 @@ Given the posterior values,
 we should be able to generate observations that look close to what we observed.
 
 ```{code-cell} ipython3
-:tags: []
-
 with model1:
     ppc = pm.sample_posterior_predictive(trace1)
 ```
@@ -294,22 +286,16 @@ The more important question is whether we have recovered the parameters of the d
 Recall that we used `mu = -2` and `sigma = 2` to generate the data.
 
 ```{code-cell} ipython3
-:tags: []
-
 az.plot_posterior(trace1, var_names=["mu", "sigma"], ref_val=[true_mu, true_sigma]);
 ```
 
 Pretty good! And we can access the posterior mean estimates (stored as [xarray](http://xarray.pydata.org/en/stable/index.html) types) as below. The MCMC samples arrive back in a 2D matrix with one dimension for the MCMC chain (`chain`), and one for the sample number (`draw`). We can calculate the overall posterior average with `.mean(dim=["draw", "chain"])`.
 
 ```{code-cell} ipython3
-:tags: []
-
 trace1.posterior["mu"].mean(dim=["draw", "chain"]).values
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 trace1.posterior["sigma"].mean(dim=["draw", "chain"]).values
 ```

@@ -324,8 +310,6 @@ Above, we used one set of binned data. Let's see what happens when we swap out f
 As with the above, here's the model specification.
 
 ```{code-cell} ipython3
-:tags: []
-
 with pm.Model() as model2:
     sigma = pm.HalfNormal("sigma")
     mu = pm.Normal("mu")
@@ -336,15 +320,11 @@ with pm.Model() as model2:
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 with model2:
     trace2 = pm.sample()
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 az.plot_trace(trace2);
 ```

@@ -353,23 +333,17 @@ az.plot_trace(trace2);
 Let's run a PPC check to ensure we are generating data that are similar to what we observed.
 
 ```{code-cell} ipython3
-:tags: []
-
 with model2:
     ppc = pm.sample_posterior_predictive(trace2)
 ```
 
 We calculate the mean bin posterior predictive bin counts, averaged over samples.
 
 ```{code-cell} ipython3
-:tags: []
-
 ppc.posterior_predictive.counts2.mean(dim=["chain", "draw"]).values
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 c2.values
 ```

@@ -399,14 +373,10 @@ az.plot_posterior(trace2, var_names=["mu", "sigma"], ref_val=[true_mu, true_sigm
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 trace2.posterior["mu"].mean(dim=["draw", "chain"]).values
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 trace2.posterior["sigma"].mean(dim=["draw", "chain"]).values
 ```

@@ -419,8 +389,6 @@ Now we need to see what happens if we add in both ways of binning.
 ### Model Specification
 
 ```{code-cell} ipython3
-:tags: []
-
 with pm.Model() as model3:
     sigma = pm.HalfNormal("sigma")
     mu = pm.Normal("mu")
@@ -442,8 +410,6 @@ pm.model_to_graphviz(model3)
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 with model3:
     trace3 = pm.sample()
 ```
@@ -488,20 +454,14 @@ ax[1].set_title("Seven bin discretization of N(-2, 2)")
 ### Recovering parameters
 
 ```{code-cell} ipython3
-:tags: []
-
 trace3.posterior["mu"].mean(dim=["draw", "chain"]).values
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 trace3.posterior["sigma"].mean(dim=["draw", "chain"]).values
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 az.plot_posterior(trace3, var_names=["mu", "sigma"], ref_val=[true_mu, true_sigma]);
 ```

1 change: 0 additions & 1 deletion examples/case_studies/blackbox_external_likelihood.ipynb
@@ -499,7 +499,6 @@
 "source": [
 "# define a theano Op for our likelihood function\n",
 "class LogLikeWithGrad(tt.Op):\n",
-"\n",
 "    itypes = [tt.dvector]  # expects a vector of parameter values when called\n",
 "    otypes = [tt.dscalar]  # outputs a single scalar value (the log likelihood)\n",
 "\n",
1 change: 0 additions & 1 deletion examples/case_studies/blackbox_external_likelihood.myst.md
@@ -423,7 +423,6 @@ It's not quite so simple! The `grad()` method itself requires that its inputs ar
 ```{code-cell} ipython3
 # define a theano Op for our likelihood function
 class LogLikeWithGrad(tt.Op):
-
     itypes = [tt.dvector]  # expects a vector of parameter values when called
     otypes = [tt.dscalar]  # outputs a single scalar value (the log likelihood)
1 change: 0 additions & 1 deletion
@@ -438,7 +438,6 @@
 "source": [
 "# define a pytensor Op for our likelihood function\n",
 "class LogLikeWithGrad(pt.Op):\n",
-"\n",
 "    itypes = [pt.dvector]  # expects a vector of parameter values when called\n",
 "    otypes = [pt.dscalar]  # outputs a single scalar value (the log likelihood)\n",
 "\n",
1 change: 0 additions & 1 deletion
@@ -246,7 +246,6 @@ It's not quite so simple! The `grad()` method itself requires that its inputs ar
 ```{code-cell} ipython3
 # define a pytensor Op for our likelihood function
 class LogLikeWithGrad(pt.Op):
-
     itypes = [pt.dvector]  # expects a vector of parameter values when called
     otypes = [pt.dscalar]  # outputs a single scalar value (the log likelihood)
4 changes: 0 additions & 4 deletions
@@ -13,8 +13,6 @@ myst:
     extra_dependencies: geopandas libpysal
 ---
 
-+++ {"tags": []}
-
 (conditional_autoregressive_priors)=
 # Conditional Autoregressive (CAR) Models for Spatial Data
 
@@ -82,8 +80,6 @@ except FileNotFoundError:
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 df_scot_cancer.head()
 ```

1 change: 0 additions & 1 deletion examples/case_studies/disaster_model.py
@@ -137,7 +137,6 @@
 year = arange(1851, 1962)
 
 with pm.Model() as model:
-
     switchpoint = pm.DiscreteUniform("switchpoint", lower=year.min(), upper=year.max())
     early_mean = pm.Exponential("early_mean", lam=1.0)
     late_mean = pm.Exponential("late_mean", lam=1.0)
1 change: 0 additions & 1 deletion examples/case_studies/disaster_model_theano_op.py
@@ -142,7 +142,6 @@ def rate_(switchpoint, early_mean, late_mean):
 
 
 with pm.Model() as model:
-
     # Prior for distribution of switchpoint location
     switchpoint = pm.DiscreteUniform("switchpoint", lower=0, upper=years)
     # Priors for pre- and post-switch mean number of disasters
1 change: 0 additions & 1 deletion examples/case_studies/gelman_bioassay.py
@@ -8,7 +8,6 @@
 dose = array([-0.86, -0.3, -0.05, 0.73])
 
 with pm.Model() as model:
-
     # Logit-linear model parameters
     alpha = pm.Normal("alpha", 0, sigma=100.0)
     beta = pm.Normal("beta", 0, sigma=1.0)
1 change: 0 additions & 1 deletion examples/case_studies/gelman_schools.py
@@ -31,7 +31,6 @@
 sigma = np.array([15, 10, 16, 11, 9, 11, 10, 18])
 
 with Model() as schools:
-
     eta = Normal("eta", 0, 1, shape=J)
     mu = Normal("mu", 0, sigma=1e6)
     tau = HalfCauchy("tau", 25)
2 changes: 0 additions & 2 deletions examples/case_studies/hierarchical_partial_pooling.ipynb
@@ -155,7 +155,6 @@
 "coords = {\"player_names\": player_names.tolist()}\n",
 "\n",
 "with pm.Model(coords=coords) as baseball_model:\n",
-"\n",
 "    phi = pm.Uniform(\"phi\", lower=0.0, upper=1.0)\n",
 "\n",
 "    kappa_log = pm.Exponential(\"kappa_log\", lam=1.5)\n",
@@ -186,7 +185,6 @@
 "outputs": [],
 "source": [
 "with baseball_model:\n",
-"\n",
 "    theta_new = pm.Beta(\"theta_new\", alpha=phi * kappa, beta=(1.0 - phi) * kappa)\n",
 "    y_new = pm.Binomial(\"y_new\", n=4, p=theta_new, observed=0)"
 ]
2 changes: 0 additions & 2 deletions examples/case_studies/hierarchical_partial_pooling.myst.md
@@ -96,7 +96,6 @@ player_names = data["FirstName"] + " " + data["LastName"]
 coords = {"player_names": player_names.tolist()}
 
 with pm.Model(coords=coords) as baseball_model:
-
     phi = pm.Uniform("phi", lower=0.0, upper=1.0)
 
     kappa_log = pm.Exponential("kappa_log", lam=1.5)
@@ -110,7 +109,6 @@ Recall our original question was with regard to the true batting average for a p
 
 ```{code-cell} ipython3
 with baseball_model:
-
     theta_new = pm.Beta("theta_new", alpha=phi * kappa, beta=(1.0 - phi) * kappa)
     y_new = pm.Binomial("y_new", n=4, p=theta_new, observed=0)
```