generalized_opt_routine #652

Merged
Changes from all commits (63 commits)
a368788
generalized_opt_routine
carolinafernandezp May 24, 2023
a7a2f51
add syn weights, drives, and increase time & iter
carolinafernandezp Jun 1, 2023
88ac514
Draft basic Optimizer class
carolinafernandezp Jun 8, 2023
203e95c
Draft opt class and functions based on comments
carolinafernandezp Jun 15, 2023
3d97213
added pep8 formatting
carolinafernandezp Jun 16, 2023
8f3dfef
Draft opt class and functions based on comments
carolinafernandezp Jun 15, 2023
c78406b
added pep8 formatting
carolinafernandezp Jun 16, 2023
69503bc
Address comments for more generalized routine
carolinafernandezp Jun 22, 2023
4db96dd
added methods to remove 1/f and compute psd
carolinafernandezp Jun 29, 2023
4cd4f4a
add optimize rhythmic function
carolinafernandezp Jul 12, 2023
2977aa1
Draft opt class and functions based on comments
carolinafernandezp Jun 15, 2023
d8b9cb8
added pep8 formatting
carolinafernandezp Jun 16, 2023
726bfd8
Address comments for more generalized routine
carolinafernandezp Jun 22, 2023
9530dd7
Clean up optimize evoked and example
carolinafernandezp Jul 27, 2023
7823264
Address comments for more generalized routine
carolinafernandezp Jun 22, 2023
83ac65f
Update hnn_core/opt_toy_example/general.py
carolinafernandezp Jun 22, 2023
3f4d44b
Delete file
carolinafernandezp Jul 27, 2023
28915fa
Add tests and address comments
carolinafernandezp Aug 3, 2023
64977af
Address comments and fix test script
carolinafernandezp Aug 7, 2023
af81c71
fix test script
carolinafernandezp Aug 8, 2023
fe1a3fe
put files in right path, address comments
carolinafernandezp Aug 10, 2023
e08509e
change n_procs
carolinafernandezp Aug 10, 2023
fcd3e8e
remove blank spaces
carolinafernandezp Aug 10, 2023
f107131
address comments
carolinafernandezp Aug 10, 2023
c3aee67
move optimization.py file
carolinafernandezp Aug 10, 2023
7bef156
add old optimization.py file to new optimization dir
carolinafernandezp Aug 10, 2023
c376fb8
reduce example to one drive, justify syn weight ranges, parametrize t…
carolinafernandezp Aug 10, 2023
1f068cb
Address comments
carolinafernandezp Aug 14, 2023
aff7ce9
Update whats_new.rst
carolinafernandezp Aug 14, 2023
465c666
update api.rst
carolinafernandezp Aug 15, 2023
c7443ed
Rename optimization test file
carolinafernandezp Aug 15, 2023
3453dd8
update whats_new.rst
carolinafernandezp Aug 17, 2023
aa3dbab
update whats_new.rst
carolinafernandezp Aug 17, 2023
02be065
update flake8, fix imports
carolinafernandezp Aug 17, 2023
54d051f
doc attributes, expose max_iter, update_example
carolinafernandezp Aug 17, 2023
b7877f3
update repr, attributes, and mispelling in example
carolinafernandezp Aug 21, 2023
8a91173
update example, update relative imports, nest gp_minimize import, cha…
carolinafernandezp Aug 22, 2023
489d169
address naming conv comments, rename metrics.py, add function signatu…
carolinafernandezp Aug 24, 2023
82e28c7
update imports in optimization/optimize_evoked.py
carolinafernandezp Aug 24, 2023
27db0ac
Update unit_tests.yml
carolinafernandezp Aug 24, 2023
cd56857
Update windows_unit_tests.yml
carolinafernandezp Aug 24, 2023
7aa7be1
test exception raised with pytest.raises
carolinafernandezp Aug 24, 2023
ac9be58
add bayesopt.py
carolinafernandezp Aug 24, 2023
e07afc5
Update unit_tests.yml
carolinafernandezp Aug 24, 2023
31939e5
Update windows_unit_tests.yml
carolinafernandezp Aug 24, 2023
7423b25
adapt bayesian optimization functions
carolinafernandezp Aug 25, 2023
4a362fa
update bayesian optimizer, fix reduced net, add dependency
carolinafernandezp Aug 25, 2023
f9d3c07
remove comment
carolinafernandezp Aug 25, 2023
8a32774
update email address, cleanup bayesopt.py
carolinafernandezp Aug 28, 2023
817f132
Update unit_tests.yml
carolinafernandezp Aug 25, 2023
27cefc4
Update windows_unit_tests.yml
carolinafernandezp Aug 25, 2023
f29330f
update example
carolinafernandezp Aug 30, 2023
e97b3e9
update example, use dipole.py _rmse
carolinafernandezp Aug 30, 2023
ecdd6f5
update bayesopt.py
carolinafernandezp Aug 30, 2023
9f92685
update README
carolinafernandezp Aug 31, 2023
fa02957
remove basinhopping import in bayesopt.py
carolinafernandezp Aug 31, 2023
7df94ea
fix flake8 errors
carolinafernandezp Aug 31, 2023
4a56279
fix PEP formatting in test_general_optimization.py
carolinafernandezp Aug 31, 2023
07f8a72
Update unit_tests.yml
carolinafernandezp Sep 2, 2023
ec2391c
Update windows_unit_tests.yml
carolinafernandezp Sep 2, 2023
bb43e69
add func to _run_optimization
carolinafernandezp Sep 7, 2023
3b4ec36
update optimize_evoked.py
carolinafernandezp Sep 7, 2023
95df184
update README instructions
carolinafernandezp Sep 7, 2023
4 changes: 2 additions & 2 deletions .github/workflows/unit_tests.yml
@@ -43,7 +43,7 @@ jobs:
- name: Install HNN-core
shell: bash -el {0}
run: |
-          pip install --verbose '.[gui]'
+          pip install --verbose '.[gui,opt]'
- name: Lint with flake8
shell: bash -el {0}
run: |
@@ -55,4 +55,4 @@ jobs:
- name: Upload code coverage
shell: bash -el {0}
run: |
-          bash <(curl -s https://codecov.io/bash) -f ./coverage.xml
+          bash <(curl -s https://codecov.io/bash) -f ./coverage.xml
2 changes: 1 addition & 1 deletion .github/workflows/windows_unit_tests.yml
@@ -40,7 +40,7 @@ jobs:
python -m pip install --upgrade pip
pip install flake8 pytest pytest-cov
pip install psutil joblib
-        pip install --verbose .[gui]
+        pip install --verbose .[gui,opt]
- name: Test with pytest
shell: cmd
run: |
12 changes: 12 additions & 0 deletions README.rst
@@ -28,6 +28,11 @@ GUI
* ipywidgets (<=7.7.1)
* voila (<=0.3.6)

Optimization
~~~~~~~~~~~~

* scikit-learn

Parallel processing
~~~~~~~~~~~~~~~~~~~

@@ -61,6 +66,13 @@ To check if everything worked fine, you can do::

and it should not give any error messages.

**Installing optimization dependencies**

If you are using Bayesian optimization, then scikit-learn is required. Install
hnn-core with scikit-learn using the following command::

$ pip install hnn_core[opt]
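
As a quick sanity check (a minimal sketch, assuming the ``opt`` extra pulls
in scikit-learn as listed above), verify that the dependency imports::

    $ python -c "import sklearn; print(sklearn.__version__)"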

**GUI installation**

To install the GUI dependencies along with ``hnn-core``, a simple tweak to the above command is needed::
2 changes: 1 addition & 1 deletion doc/api.rst
@@ -40,7 +40,7 @@ Optimization (:py:mod:`hnn_core.optimization`):
.. autosummary::
:toctree: generated/

-   optimize_evoked
+   Optimizer

Dipole (:py:mod:`hnn_core.dipole`):
-----------------------------------
4 changes: 4 additions & 0 deletions doc/whats_new.rst
@@ -38,6 +38,10 @@ API
- :func:`~hnn_core.Dipole.write` and :func:`~hnn_core.Dipole.read_dipoles`
now support hdf5 format for read/write Dipole object, by
`Rajat Partani`_ in :gh:`648`
rythorpe marked this conversation as resolved.

- Add ability to optimize parameters associated with evoked drives and plot
convergence. User can constrain parameter ranges and specify solver,
by `Carolina Fernandez Pujol`_ in :gh:`652`
Comment on lines +41 to +44 (Collaborator):
you should leave this for the last when everyone has agreed the PR is ready for merge. This file is prone to rebase conflicts
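
For context, a condensed sketch of the workflow this changelog entry
describes. Names and signatures follow the updated
``examples/howto/optimize_evoked.py`` shown later in this PR; the drive
parameters and bounds below are illustrative placeholders only::

    from hnn_core import jones_2009_model
    from hnn_core.optimization import Optimizer

    def set_params(net, params):
        # Re-add the optimized drive with the candidate parameter values.
        net.add_evoked_drive('evprox1', mu=params['mu'],
                             sigma=params['sigma'], numspikes=1,
                             location='proximal',
                             weights_ampa={'L2_pyramidal': 0.5})

    constraints = {'mu': (1., 20.), 'sigma': (1., 5.)}
    net = jones_2009_model()
    optim = Optimizer(net, tstop=100., constraints=constraints,
                      set_params=set_params)
    optim.fit(exp_dpl)  # exp_dpl: an experimental Dipole to fit
    fig = optim.plot_convergence()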


.. _0.3:

223 changes: 157 additions & 66 deletions examples/howto/optimize_evoked.py
@@ -7,21 +7,21 @@
of the model simulation to match an experimental dipole waveform.
"""

-# Authors: Blake Caldwell <[email protected]>
# Authors: Carolina Fernandez <[email protected]>
#          Nick Tolley <[email protected]>
#          Ryan Thorpe <[email protected]>
#          Mainak Jas <[email protected]>

import os.path as op

import numpy as np
import matplotlib.pyplot as plt

###############################################################################
# Let us import hnn_core

import hnn_core
-from hnn_core import (MPIBackend, jones_2009_model, read_params,
-                      simulate_dipole, read_dipole)
from hnn_core import (MPIBackend, jones_2009_model, simulate_dipole,
                      read_dipole)

hnn_core_root = op.join(op.dirname(hnn_core.__file__))

@@ -34,6 +34,7 @@
# This is a different experiment than the one to which the base parameters were
# tuned. So, the initial RMSE will be large, giving the optimization procedure
# a lot to work with.

from urllib.request import urlretrieve

data_url = ('https://raw.githubusercontent.com/jonescompneurolab/hnn/master/'
@@ -42,82 +43,172 @@
exp_dpl = read_dipole('S1_SupraT.txt')

###############################################################################
-# Read the base parameters from a file
-params_fname = op.join(hnn_core_root, 'param', 'default.json')
-params = read_params(params_fname)
# Let's then simulate the dipole with some initial parameters.

-###############################################################################
-# Let's first simulate the dipole with some initial parameters. The parameter
-# definitions also contain the drives. Even though we could add drives
-# explicitly through our API
-# (see :ref:`sphx_glr_auto_examples_workflows_plot_simulate_evoked.py`),
-# for conciseness,
-# we add them automatically from the parameter files
-
-scale_factor = 3000.
-smooth_window_len = 30.
tstop = exp_dpl.times[-1]
-net = jones_2009_model(params=params, add_drives_from_params=True)
-with MPIBackend(n_procs=n_procs):
-    print("Running simulation with initial parameters")
-    initial_dpl = simulate_dipole(net, tstop=tstop, n_trials=1)[0]
-    initial_dpl = initial_dpl.scale(scale_factor).smooth(smooth_window_len)
scale_factor = 3000
smooth_window_len = 30

net_init = jones_2009_model()

# Proximal 1
weights_ampa_p1 = {'L2_basket': 0.2913, 'L2_pyramidal': 0.9337,
                   'L5_basket': 0.1951, 'L5_pyramidal': 0.3602}
weights_nmda_p1 = {'L2_basket': 0.9240, 'L2_pyramidal': 0.0845,
                   'L5_basket': 0.5849, 'L5_pyramidal': 0.65105}
synaptic_delays_p = {'L2_basket': 0.1, 'L2_pyramidal': 0.1,
                     'L5_basket': 1., 'L5_pyramidal': 1.}
net_init.add_evoked_drive('evprox1',
                          mu=5.6813,
                          sigma=20.3969,
                          numspikes=1,
                          location='proximal',
                          weights_ampa=weights_ampa_p1,
                          weights_nmda=weights_nmda_p1,
                          synaptic_delays=synaptic_delays_p)

# Distal
weights_ampa_d1 = {'L2_basket': 0.8037, 'L2_pyramidal': 0.5738,
                   'L5_pyramidal': 0.3626}
weights_nmda_d1 = {'L2_basket': 0.2492, 'L2_pyramidal': 0.6183,
                   'L5_pyramidal': 0.1121}
synaptic_delays_d1 = {'L2_basket': 0.1, 'L2_pyramidal': 0.1,
                      'L5_pyramidal': 0.1}
net_init.add_evoked_drive('evdist1',
                          mu=58.6539,
                          sigma=5.5810,
                          numspikes=1,
                          location='distal',
                          weights_ampa=weights_ampa_d1,
                          weights_nmda=weights_nmda_d1,
                          synaptic_delays=synaptic_delays_d1)

# Proximal 2
weights_ampa_p2 = {'L2_basket': 0.01, 'L2_pyramidal': 0.01, 'L5_basket': 0.01,
                   'L5_pyramidal': 0.01}
weights_nmda_p2 = {'L2_basket': 0.01, 'L2_pyramidal': 0.01, 'L5_basket': 0.01,
                   'L5_pyramidal': 0.01}
net_init.add_evoked_drive('evprox2',
                          mu=80,
                          sigma=1,
                          numspikes=1,
                          location='proximal',
                          weights_ampa=weights_ampa_p2,
                          weights_nmda=weights_nmda_p2,
                          synaptic_delays=synaptic_delays_p)

with MPIBackend(n_procs=n_procs, mpi_cmd='mpiexec'):
    init_dpl = simulate_dipole(net_init, tstop=tstop, n_trials=1)[0]
init_dpl.scale(scale_factor)
init_dpl.smooth(smooth_window_len)

###############################################################################
# Now we start the optimization!

-from hnn_core.optimization import optimize_evoked
-
-with MPIBackend(n_procs=n_procs):
-    net_opt = optimize_evoked(net, tstop=tstop, n_trials=1,
-                              target_dpl=exp_dpl, initial_dpl=initial_dpl,
-                              scale_factor=scale_factor,
-                              smooth_window_len=smooth_window_len)
-
-###############################################################################
-# Now, let's simulate the dipole with the optimized drive parameters.
-with MPIBackend(n_procs=n_procs):
-    best_dpl = simulate_dipole(net_opt, tstop=tstop, n_trials=1)[0]
-    best_dpl = best_dpl.scale(scale_factor).smooth(smooth_window_len)
#
# First, we define a function that will tell the optimization routine how to
# modify the network drive parameters. The function will take in the Network
# object with no attached drives, and a dictionary of the parameters we wish to
# optimize.


def set_params(net, params):

    # Proximal 1
    net.add_evoked_drive('evprox1',
                         mu=5.6813,
                         sigma=20.3969,
                         numspikes=1,
                         location='proximal',
                         weights_ampa=weights_ampa_p1,
                         weights_nmda=weights_nmda_p1,
                         synaptic_delays=synaptic_delays_p)

    # Distal
Collaborator suggestion:
net = net.copy(reset_drives=False)
del net.external_drives['evprox2']
net.add_evoked_drive(...)

    net.add_evoked_drive('evdist1',
                         mu=58.6539,
                         sigma=5.5810,
                         numspikes=1,
                         location='distal',
                         weights_ampa=weights_ampa_d1,
                         weights_nmda=weights_nmda_d1,
                         synaptic_delays=synaptic_delays_d1)

    # Proximal 2
    weights_ampa_p2 = {'L2_basket': params['evprox2_ampa_L2_basket'],
                       'L2_pyramidal': params['evprox2_ampa_L2_pyramidal'],
                       'L5_basket': params['evprox2_ampa_L5_basket'],
                       'L5_pyramidal': params['evprox2_ampa_L5_pyramidal']}
    weights_nmda_p2 = {'L2_basket': params['evprox2_nmda_L2_basket'],
                       'L2_pyramidal': params['evprox2_nmda_L2_pyramidal'],
                       'L5_basket': params['evprox2_nmda_L5_basket'],
                       'L5_pyramidal': params['evprox2_nmda_L5_pyramidal']}
    net.add_evoked_drive('evprox2',
                         mu=params['evprox2_mu'],
                         sigma=params['evprox2_sigma'],
                         numspikes=1,
                         location='proximal',
                         weights_ampa=weights_ampa_p2,
                         weights_nmda=weights_nmda_p2,
                         synaptic_delays=synaptic_delays_p)
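
A hedged sketch of the Collaborator suggestion above: rather than re-adding
the fixed drives inside set_params, copy a template network that already
carries them and rebuild only the drive being optimized. Whether set_params
should return the copy depends on how the optimization routine consumes it,
so treat this as illustrative rather than the PR's implementation:

    def set_params(net, params):
        # Keep evprox1/evdist1 from a template network that already has them;
        # rebuild only the drive whose parameters are being optimized.
        net = net.copy(reset_drives=False)
        del net.external_drives['evprox2']
        # weights_ampa_p2 / weights_nmda_p2 would be built from `params`
        # exactly as in the set_params definition above.
        net.add_evoked_drive('evprox2',
                             mu=params['evprox2_mu'],
                             sigma=params['evprox2_sigma'],
                             numspikes=1,
                             location='proximal',
                             weights_ampa=weights_ampa_p2,
                             weights_nmda=weights_nmda_p2,
                             synaptic_delays=synaptic_delays_p)
        return net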

###############################################################################
-# Then, we can plot the pre- and post-optimization simulations alongside the
-# experimental data.
# Then, we define the constraints.
#
# The constraints must be a dictionary of tuples where the first value in each
# tuple is the lower bound and the second value is the upper bound for the
# corresponding parameter.
#
# The following synaptic weight parameter ranges (units of micro-siemens)
# were chosen so as to keep the model in physiologically realistic regimes.

-fig, axes = plt.subplots(2, 1, sharex=True, figsize=(6, 6))
-
-exp_dpl.plot(ax=axes[0], layer='agg', show=False, color='tab:blue')
-initial_dpl.plot(ax=axes[0], layer='agg', show=False, color='tab:orange')
-best_dpl.plot(ax=axes[0], layer='agg', show=False, color='tab:green')
-axes[0].legend(['experimental', 'initial', 'optimized'])
-net_opt.cell_response.plot_spikes_hist(ax=axes[1])
constraints = dict({'evprox2_ampa_L2_basket': (0.01, 1.),
                    'evprox2_ampa_L2_pyramidal': (0.01, 1.),
                    'evprox2_ampa_L5_basket': (0.01, 1.),
                    'evprox2_ampa_L5_pyramidal': (0.01, 1.),
                    'evprox2_nmda_L2_basket': (0.01, 1.),
                    'evprox2_nmda_L2_pyramidal': (0.01, 1.),
                    'evprox2_nmda_L5_basket': (0.01, 1.),
                    'evprox2_nmda_L5_pyramidal': (0.01, 1.),
                    'evprox2_mu': (100., 120.),
                    'evprox2_sigma': (2., 30.)})

###############################################################################
-# Finally, let's explore which parameters were changed to cause the
-# improved dipole fit.
-# As an example, we will look at the new dynamics of the first proximal drive,
-# as well as the synaptic weight of its layer 2/3 pyramidal AMPA receptors.
# Now we define and fit the optimizer.

-from hnn_core.network import pick_connection
from hnn_core.optimization import Optimizer

-dynamics_opt = net_opt.external_drives['evdist1']['dynamics']
net = jones_2009_model()
optim = Optimizer(net, tstop=tstop, constraints=constraints,
                  set_params=set_params, scale_factor=scale_factor,
                  smooth_window_len=smooth_window_len, max_iter=40)
with MPIBackend(n_procs=n_procs, mpi_cmd='mpiexec'):
    optim.fit(exp_dpl)

-conn_indices = pick_connection(net=net_opt, src_gids='evprox1', target_gids='L2_pyramidal',
-                               loc='proximal', receptor='ampa')
-conn_idx = conn_indices[0]
-weight_opt = net_opt.connectivity[conn_idx]
###############################################################################
# Finally, we can plot the experimental data alongside the post-optimization
# simulation dipole as well as the convergence plot.

print("Optimized dynamic properties: ", dynamics_opt)
print("\nOptimized weight: ", weight_opt)
with MPIBackend(n_procs=n_procs, mpi_cmd='mpiexec'):
opt_dpl = simulate_dipole(optim.net_, tstop=tstop, n_trials=1)[0]
opt_dpl.scale(scale_factor)
opt_dpl.smooth(smooth_window_len)

###############################################################################
-# Let's compare to the initial dynamic properties and weight.
-dynamics_initial = net.external_drives['evdist1']['dynamics']
fig, axes = plt.subplots(2, 1, sharex=True, figsize=(6, 6))

-conn_indices = pick_connection(net=net, src_gids='evprox1', target_gids='L2_pyramidal',
-                               loc='proximal', receptor='ampa')
-conn_idx = conn_indices[0]
-weight_initial = net.connectivity[conn_idx]
# plot original
exp_dpl.plot(ax=axes[0], layer='agg', show=False, color='tab:blue')
init_dpl.plot(ax=axes[0], layer='agg', show=False, color='tab:orange')
opt_dpl.plot(ax=axes[0], layer='agg', show=False, color='tab:green')
axes[0].legend(['experimental', 'initial', 'optimized'])
optim.net_.cell_response.plot_spikes_hist(ax=axes[1])

print("Initial dynamic properties: ", dynamics_initial)
print("\nInitial weight: ", weight_initial)
fig1 = optim.plot_convergence()