Merge pull request #420 from neuromatch/W1D5-superset
W1D5 Post-Course Update (Superset)
glibesyck authored Aug 21, 2024
2 parents 5ad4631 + 304e301 commit cf1b87a
Showing 9 changed files with 612 additions and 177 deletions.
132 changes: 86 additions & 46 deletions tutorials/W1D5_Microcircuits/W1D5_Tutorial1.ipynb
@@ -386,7 +386,7 @@
" ax.legend(ncol = 2)\n",
" remove_edges(ax)\n",
" ax.set_xlim(left = 0, right = 100)\n",
" add_labels(ax, xlabel = '$\\\\tau$', ylabel = 'Count')\n",
" add_labels(ax, xlabel = 'Value', ylabel = 'Count')\n",
" plt.show()\n",
"\n",
"def plot_temp_diff_separate_histograms(signal, lags, lags_list, tau = True):\n",
@@ -656,46 +656,6 @@
" plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Helper functions\n",
"\n",
"def normalize(mat):\n",
" \"\"\"\n",
" Normalize input matrix from 0 to 255 values (in RGB range).\n",
"\n",
" Inputs:\n",
" - mat (np.ndarray): data to normalize.\n",
"\n",
" Outpus:\n",
" - (np.ndarray): normalized data.\n",
" \"\"\"\n",
" mat_norm = (mat - np.percentile(mat, 10))/(np.percentile(mat, 90) - np.percentile(mat, 10))\n",
" mat_norm = mat_norm*255\n",
" mat_norm[mat_norm > 255] = 255\n",
" mat_norm[mat_norm < 0] = 0\n",
" return mat_norm\n",
"\n",
"def lists2list(xss):\n",
" \"\"\"\n",
" Flatten a list of lists into a single list.\n",
"\n",
" Inputs:\n",
" - xss (list): list of lists. The list of lists to be flattened.\n",
"\n",
" Outputs:\n",
" - (list): The flattened list.\n",
" \"\"\"\n",
" return [x for xs in xss for x in xs]"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -738,6 +698,78 @@
" fid.write(r.content) # Write the downloaded content to a file"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Helper functions\n",
"\n",
"def normalize(mat):\n",
" \"\"\"\n",
" Normalize input matrix from 0 to 255 values (in RGB range).\n",
"\n",
" Inputs:\n",
" - mat (np.ndarray): data to normalize.\n",
"\n",
" Outpus:\n",
" - (np.ndarray): normalized data.\n",
" \"\"\"\n",
" mat_norm = (mat - np.percentile(mat, 10))/(np.percentile(mat, 90) - np.percentile(mat, 10))\n",
" mat_norm = mat_norm*255\n",
" mat_norm[mat_norm > 255] = 255\n",
" mat_norm[mat_norm < 0] = 0\n",
" return mat_norm\n",
"\n",
"def lists2list(xss):\n",
" \"\"\"\n",
" Flatten a list of lists into a single list.\n",
"\n",
" Inputs:\n",
" - xss (list): list of lists. The list of lists to be flattened.\n",
"\n",
" Outputs:\n",
" - (list): The flattened list.\n",
" \"\"\"\n",
" return [x for xs in xss for x in xs]\n",
"\n",
"# exercise solutions for correct plots\n",
"\n",
"def ReLU(x, theta = 0):\n",
" \"\"\"\n",
" Calculates ReLU function for the given level of theta.\n",
"\n",
" Inputs:\n",
" - x (np.ndarray): input data.\n",
" - theta (float, default = 0): threshold parameter.\n",
"\n",
" Outputs:\n",
" - thres_x (np.ndarray): filtered values.\n",
" \"\"\"\n",
"\n",
" thres_x = np.maximum(x - theta, 0)\n",
"\n",
" return thres_x\n",
"\n",
"sig = np.load('sig.npy')\n",
"temporal_diff = np.abs(np.diff(sig))\n",
"\n",
"num_taus = 10\n",
"taus = np.linspace(1, 91, num_taus).astype(int)\n",
"taus_list = [np.abs(sig[tau:] - sig[:-tau]) for tau in taus]\n",
"\n",
"T_ar = np.arange(len(sig))\n",
"\n",
"freqs = np.linspace(0.001, 1, 100)\n",
"set_sigs = [np.sin(T_ar*f) for f in freqs]\n",
"\n",
"reg = OrthogonalMatchingPursuit(fit_intercept = True, n_nonzero_coefs = 10).fit(np.vstack(set_sigs).T, sig)"
]
},
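For reference, a quick sanity check of these helpers (a minimal sketch, assuming the cell above has been run so that `normalize` and `lists2list` are defined):

```python
# Illustrative only; relies on the helper functions defined in the cell above.
import numpy as np

mat = np.random.default_rng(0).normal(size=(8, 8))
mat_norm = normalize(mat)
# Percentile-based rescaling followed by clipping keeps values in [0, 255].
assert mat_norm.min() >= 0 and mat_norm.max() <= 255

print(lists2list([[1, 2], [3], [4, 5]]))  # [1, 2, 3, 4, 5]
```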
{
"cell_type": "markdown",
"metadata": {
@@ -1055,6 +1087,11 @@
},
"outputs": [],
"source": [
"###################################################################\n",
"## Fill out the following then remove\n",
"raise NotImplementedError(\"Student exercise: complete `thres_x` array calculation as defined.\")\n",
"###################################################################\n",
"\n",
"def ReLU(x, theta = 0):\n",
" \"\"\"\n",
" Calculates ReLU function for the given level of theta.\n",
@@ -1066,10 +1103,6 @@
" Outputs:\n",
" - thres_x (np.ndarray): filtered values.\n",
" \"\"\"\n",
" ###################################################################\n",
" ## Fill out the following then remove\n",
" raise NotImplementedError(\"Student exercise: complete `thres_x` array calculation as defined.\")\n",
" ###################################################################\n",
"\n",
" thres_x = ...\n",
"\n",
@@ -1294,6 +1327,7 @@
"source": [
"\n",
"Denote the pixel value at time $t$ by $pixel_t$. Mathematically, we define the (absolute) temporal differences as\n",
"\n",
"$$\\Delta_t = |pixel_t - pixel_{t-1}|$$\n",
"\n",
"In code, define these absolute temporal differences to compute `temporal_diff` by applying `np.diff` on the signal `sig` and then applying `np.abs` to get absolute values."
Expand Down Expand Up @@ -1403,6 +1437,7 @@
},
"source": [
"What happens if we look at differences at longer delays $\\tau>1$?\n",
"\n",
"$$\\Delta_t(\\tau) = |pixel_t - pixel_{t-\\tau}|$$"
]
},
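In code, this becomes one slice-and-subtract per delay; a sketch matching the `taus_list` computation in the helper cell earlier in this diff:

```python
import numpy as np

sig = np.load('sig.npy')
num_taus = 10
taus = np.linspace(1, 91, num_taus).astype(int)
# For each delay tau, align the signal with a tau-shifted copy of itself:
# Delta_t(tau) = |pixel_t - pixel_{t-tau}|.
taus_list = [np.abs(sig[tau:] - sig[:-tau]) for tau in taus]
```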
@@ -1569,7 +1604,7 @@
" plot_edge = lists2list( [[e1 , e2] for e1 , e2 in zip(plot_e1, plot_e2)])\n",
" ax.plot(plot_edge, np.repeat(filter, 2), alpha = 0.3, color = 'purple')\n",
" ax.scatter(plot_edge_mean, filter, color = 'purple')\n",
" add_labels(ax,ylabel = 'filter value', title = 'box filter', xlabel = 'Frame')"
" add_labels(ax,ylabel = 'Filter Value', title = 'Box Filter', xlabel = 'Value')"
]
},
{
@@ -1903,18 +1938,23 @@
},
"source": [
"The $\\ell_0$ pseudo-norm is defined as the number of non-zero features in the signal. Particularly, let $h \\in \\mathbb{R}^{J}$ be a vector with $J$ \"latent activity\" features. Then:\n",
"\n",
"$$\\|h\\|_0 = \\sum_{j = 1}^J \\mathbb{1}_{h_{j} \\neq 0}$$\n",
"\n",
"Hence, the $\\|\\ell\\|_0$ pseudo-norm can be used to promote sparsity by adding it to a cost function to \"punish\" the number of non-zero features.\n",
"\n",
"Let's assume that we have a simple linear model where we want to capture the observations $y$ using the linear model $D$ (which we will later call dictionary). $D$'s features (columns) can have sparse weights denoted by $h$. This is known as a generative model, as it generates the sensory input. \n",
"\n",
"For instance, in the brain, $D$ can represent a basis of neuronal networks while $h$ can capture their sparse time-changing contributions to the overall brain activity (e.g. see the dLDS model in [3]). \n",
"\n",
"Hence, we are looking for the weights $h$ under the assumption that:\n",
"\n",
"$$ y = Dh + \\epsilon$$\n",
"\n",
"where $\\epsilon$ is an *i.i.d* Gaussian noise with zero mean and std of $\\sigma_\\epsilon$, i.e., $\\epsilon \\sim \\mathcal{N}(0, \\sigma_\\epsilon^2)$.\n",
"\n",
"To enforce that $h$ is sparse, we penalize the number of non-zero features with penalty $\\lambda$. We thus want to solve the following minimization problem:\n",
"\n",
"$$\n",
"\\hat{h} = \\arg \\min_x \\|y - Dh \\|_2^2 + \\lambda \\|h\\|_0\n",
"$$\n",
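Solving this $\ell_0$-penalized problem exactly is combinatorial, so it is typically approximated greedily. A minimal sketch using scikit-learn's `OrthogonalMatchingPursuit`, mirroring the sinusoid-dictionary fit in the helper cell earlier in this diff (the cap of 10 non-zero coefficients matches that cell):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

sig = np.load('sig.npy')
T_ar = np.arange(len(sig))
freqs = np.linspace(0.001, 1, 100)
# Dictionary D: each column is one sinusoidal "atom".
D = np.vstack([np.sin(T_ar * f) for f in freqs]).T

# Greedy approximation to argmin_h ||y - Dh||_2^2 + lambda * ||h||_0,
# with sparsity enforced directly via n_nonzero_coefs.
reg = OrthogonalMatchingPursuit(fit_intercept=True, n_nonzero_coefs=10).fit(D, sig)
h_hat = reg.coef_
print(np.count_nonzero(h_hat))  # at most 10 non-zero weights
```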