Merge pull request #406 from neuromatch/W2D2-superset
W2D2 Post-Course Update (Superset)
glibesyck authored Aug 13, 2024
2 parents 605ee69 + 2af7d95 commit 6e72d70
Showing 18 changed files with 185 additions and 183 deletions.
41 changes: 11 additions & 30 deletions tutorials/W2D2_NeuroSymbolicMethods/W2D2_Tutorial1.ipynb
@@ -106,6 +106,15 @@
"feedback_prefix = \"W2D2_T1\""
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Note that the `neuromatch` branch of `sspspace` (and not the default one) must be installed; otherwise, some of the functionality won't work."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -539,34 +548,6 @@
"content_review(f\"{feedback_prefix}_concepts_as_high_dimensional_vectors\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"### Coding Exercise 1 Discussion\n",
"\n",
"1. Can you provide an intuitive reason, or a rigorous mathematical proof, for the fact that random high-dimensional vectors are approximately orthogonal? Here each component of the vectors is drawn independently from the same zero-mean distribution, e.g. the normal distribution $\\mathcal{N}(0, 1)$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"#to_remove explanation\n",
"\n",
"\"\"\"\n",
"Discussion: Can you provide an intuitive reason, or a rigorous mathematical proof, for the fact that random high-dimensional vectors are approximately orthogonal? Here each component of the vectors is drawn independently from the same zero-mean distribution, e.g. a normal distribution.\n",
"\n",
"Observe that since the components are independent and sampled from a zero-mean distribution, the expected value of the dot product is E(x*y) = E(\sum_i x_i * y_i) = (linearity of expectation) \sum_i E(x_i * y_i) = (independence) \sum_i E(x_i) * E(y_i) = 0. Moreover, the standard deviation of the dot product is of order \sqrt{d}, while the norms of the vectors concentrate around \sqrt{d}, so the cosine similarity shrinks like 1/\sqrt{d} as the dimension d grows.\n",
"\"\"\";"
]
},
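The deleted discussion cell argues that the expected dot product of such random vectors is zero. A quick numerical sketch (illustrative only, not part of the notebooks; the helper name is made up) confirms that the typical cosine similarity also shrinks as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(d, n_pairs=500):
    """Average |cosine similarity| over random pairs of d-dimensional N(0, 1) vectors."""
    x = rng.normal(size=(n_pairs, d))
    y = rng.normal(size=(n_pairs, d))
    cos = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1))
    return np.abs(cos).mean()

for d in (10, 100, 1000, 10000):
    print(d, mean_abs_cosine(d))  # shrinks roughly like 1 / sqrt(d)
```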
{
"cell_type": "markdown",
"metadata": {
@@ -1414,7 +1395,7 @@
" ###################################################################\n",
" sims = ...\n",
" max_sim = softmax(sims * self.temp, axis=1)\n",
" return sspspace.SSP(...)\n",
" return sspspace.SSP(...) # the sspspace.SSP() wrapper is needed for the comparisons later on; it doesn't change the resulting vector\n",
"\n",
"\n",
"cleanup = Cleanup(vocab)\n",
@@ -1445,7 +1426,7 @@
" def __call__(self, x):\n",
" sims = x @ self.weights.T\n",
" max_sim = softmax(sims * self.temp, axis=1)\n",
" return sspspace.SSP(max_sim @ self.weights)\n",
" return sspspace.SSP(max_sim @ self.weights) # the sspspace.SSP() wrapper is needed for the comparisons later on; it doesn't change the resulting vector\n",
"\n",
"\n",
"cleanup = Cleanup(vocab)\n",
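The `Cleanup` class completed above snaps a noisy query vector back onto the closest vocabulary item by taking a sharply peaked softmax over similarities. A minimal NumPy stand-in behaves the same way (a toy sketch; the actual notebooks operate on `sspspace` vectors, and the identity vocabulary here is invented):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class Cleanup:
    """Soft nearest-neighbour cleanup: project a noisy vector onto the vocabulary."""
    def __init__(self, vocab, temp=1000.0):
        self.weights = np.asarray(vocab)  # one vocabulary vector per row
        self.temp = temp

    def __call__(self, x):
        sims = np.atleast_2d(x) @ self.weights.T     # similarity to every item
        max_sim = softmax(sims * self.temp, axis=1)  # ~one-hot at high temperature
        return max_sim @ self.weights                # ~the best-matching vocab vector

vocab = np.eye(4)  # toy vocabulary of four orthogonal concepts
noisy = vocab[2] + 0.1 * np.random.default_rng(1).normal(size=4)
clean = Cleanup(vocab)(noisy)  # recovers (approximately) vocab[2]
```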
66 changes: 37 additions & 29 deletions tutorials/W2D2_NeuroSymbolicMethods/W2D2_Tutorial2.ipynb
@@ -106,6 +106,15 @@
"feedback_prefix = \"W2D2_T2\""
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Note that the `neuromatch` branch of `sspspace` (and not the default one) must be installed; otherwise, some of the functionality (like the `optimize` parameter in the `DiscreteSPSpace` initialization) won't work."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -243,6 +252,20 @@
" plt.title(f'{ant_name}, not*{cons_name}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"execution": {}
},
"outputs": [],
"source": [
"# @title Helper functions\n",
"\n",
"action_names = ['red','blue','odd','even','green','prime','not*red','not*blue','not*odd','not*even','not*green','not*prime']"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -837,7 +860,7 @@
"execution": {}
},
"source": [
"We would like to find out Mexico's currency. Complete the code for constructing a `query` which will help us do that. Note that we are using a cleanup operation."
"We would like to find out Mexico's currency. Complete the code for constructing a `query` which will help us do that. Note that we are using a cleanup operation (feel free to remove it and compare the results)."
]
},
{
@@ -853,7 +876,7 @@
"raise NotImplementedError(\"Student exercise: complete `query` concept which will be similar to currency in Mexico.\")\n",
"###################################################################\n",
"\n",
"objs['query'] = cleanup(~(objs[...] * objs[...]) * objs['mexico'])"
"objs['query'] = cleanup(~objs[...] * objs[...] * objs['mexico'])"
]
},
{
Expand All @@ -866,7 +889,7 @@
"source": [
"#to_remove solution\n",
"\n",
"objs['query'] = cleanup(~(objs['canada'] * objs['dollar']) * objs['mexico'])"
"objs['query'] = cleanup(~objs['canada'] * objs['dollar'] * objs['mexico'])"
]
},
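The solution above answers the query by unbinding (`~`) and rebinding, then cleaning up the result. To see why unbinding recovers a stored filler, here is a self-contained holographic-reduced-representation sketch using circular convolution as the binding operation (a stand-in for `sspspace`'s implementation; all names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024

def vec():
    v = rng.normal(size=d)
    return v / np.linalg.norm(v)

def bind(a, b):
    # binding as circular convolution, computed via the FFT
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=d)

def inverse(a):
    # approximate inverse of a unit-norm random vector (conjugate in Fourier space)
    return np.concatenate(([a[0]], a[:0:-1]))

name, currency, MEX, peso = vec(), vec(), vec(), vec()

# a record bundling two role-filler pairs, like the country objects in the tutorial
mexico = bind(name, MEX) + bind(currency, peso)

# unbinding the `currency` role yields a noisy copy of `peso` plus crosstalk,
# which is exactly what the cleanup step then removes
query = bind(inverse(currency), mexico)

print(query @ peso)  # large (close to 1)
print(query @ MEX)   # small (close to 0)
```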
{
Expand Down Expand Up @@ -1086,7 +1109,7 @@
"* $R$ is the rule to be learned\n",
"* $A^{*}$ is the antecedent value bundled with $\\texttt{not}$ bound with the consequent value. This is because we are trying to learn the cards that can *violate* the rule.\n",
"\n",
"Rules themselves are going to be composed like the data structures representing different countries in the previous section. `ant,` `relation`, and `cons` are extra concepts that define the structure and which will bind to the specific instances. \n",
"Rules themselves are going to be composed like the data structures representing different countries in the previous section. `ant`, `relation`, and `cons` are extra concepts that define the structure; think of them as anchor concepts (slots) that get bound to the specific instances. \n",
"\n",
"If we have a rule, $X \\implies Y$, then we would create the VSA representation:\n",
"\n",
@@ -1301,7 +1324,7 @@
"a_hat = sspspace.SSP(transform) * ...\n",
"\n",
"new_sims = action_space @ a_hat.T\n",
"y_hat = softmax(new_sims)"
"y_hat = softmax(new_sims)\n",
"\n",
"plot_choice([new_sims], [\"red\"], [\"prime\"], action_names)"
]
},
{
@@ -1320,18 +1345,8 @@
"a_hat = sspspace.SSP(transform) * new_rule\n",
"\n",
"new_sims = action_space @ a_hat.T\n",
"y_hat = softmax(new_sims)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"plt.figure(figsize=(7,5))\n",
"y_hat = softmax(new_sims)\n",
"\n",
"plot_choice([new_sims], [\"red\"], [\"prime\"], action_names)"
]
},
@@ -1370,7 +1385,9 @@
"\n",
"a_mlp = regr.predict(new_rule)\n",
"\n",
"mlp_sims = action_space @ a_mlp.T"
"mlp_sims = action_space @ a_mlp.T\n",
"\n",
"plot_choice([mlp_sims], [\"red\"], [\"prime\"], action_names)"
]
},
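The cells above fit an `MLPRegressor` to map rule vectors to action vectors and then probe it on a new rule. The underlying idea, learning the rule-to-action mapping by regression so it generalizes to unseen rules, can be sketched with a plain least-squares readout (a simplified stand-in for the notebook's MLP; the data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64

# hidden ground-truth linear map from rule vectors to action vectors
W = rng.normal(size=(d, d)) / np.sqrt(d)
rules = rng.normal(size=(500, d))  # training rules
actions = rules @ W                # their correct actions

# least-squares estimate of the mapping from (rule, action) examples
W_hat, *_ = np.linalg.lstsq(rules, actions, rcond=None)

# the learned map generalizes to a rule never seen in training
new_rule = rng.normal(size=d)
a_hat = new_rule @ W_hat
target = new_rule @ W
cos = a_hat @ target / (np.linalg.norm(a_hat) * np.linalg.norm(target))
print(cos)  # close to 1
```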
{
@@ -1396,17 +1413,8 @@
"\n",
"a_mlp = regr.predict(new_rule)\n",
"\n",
"mlp_sims = action_space @ a_mlp.T"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"mlp_sims = action_space @ a_mlp.T\n",
"\n",
"plot_choice([mlp_sims], [\"red\"], [\"prime\"], action_names)"
]
},
9 changes: 9 additions & 0 deletions tutorials/W2D2_NeuroSymbolicMethods/W2D2_Tutorial3.ipynb
@@ -106,6 +106,15 @@
"feedback_prefix = \"W2D2_T3\""
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Note that the `neuromatch` branch of `sspspace` (and not the default one) must be installed; otherwise, some of the functionality (like the `optimize` parameter in the `DiscreteSPSpace` initialization) won't work."
]
},
{
"cell_type": "code",
"execution_count": null,
41 changes: 11 additions & 30 deletions tutorials/W2D2_NeuroSymbolicMethods/instructor/W2D2_Tutorial1.ipynb
@@ -106,6 +106,15 @@
"feedback_prefix = \"W2D2_T1\""
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"Note that the `neuromatch` branch of `sspspace` (and not the default one) must be installed; otherwise, some of the functionality won't work."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -541,34 +550,6 @@
"content_review(f\"{feedback_prefix}_concepts_as_high_dimensional_vectors\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"execution": {}
},
"source": [
"### Coding Exercise 1 Discussion\n",
"\n",
"1. Can you provide an intuitive reason, or a rigorous mathematical proof, for the fact that random high-dimensional vectors are approximately orthogonal? Here each component of the vectors is drawn independently from the same zero-mean distribution, e.g. the normal distribution $\\mathcal{N}(0, 1)$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {}
},
"outputs": [],
"source": [
"#to_remove explanation\n",
"\n",
"\"\"\"\n",
"Discussion: Can you provide an intuitive reason, or a rigorous mathematical proof, for the fact that random high-dimensional vectors are approximately orthogonal? Here each component of the vectors is drawn independently from the same zero-mean distribution, e.g. a normal distribution.\n",
"\n",
"Observe that since the components are independent and sampled from a zero-mean distribution, the expected value of the dot product is E(x*y) = E(\sum_i x_i * y_i) = (linearity of expectation) \sum_i E(x_i * y_i) = (independence) \sum_i E(x_i) * E(y_i) = 0. Moreover, the standard deviation of the dot product is of order \sqrt{d}, while the norms of the vectors concentrate around \sqrt{d}, so the cosine similarity shrinks like 1/\sqrt{d} as the dimension d grows.\n",
"\"\"\";"
]
},
{
"cell_type": "markdown",
"metadata": {
@@ -1424,7 +1405,7 @@
" ###################################################################\n",
" sims = ...\n",
" max_sim = softmax(sims * self.temp, axis=1)\n",
" return sspspace.SSP(...)\n",
" return sspspace.SSP(...) # the sspspace.SSP() wrapper is needed for the comparisons later on; it doesn't change the resulting vector\n",
"\n",
"\n",
"cleanup = Cleanup(vocab)\n",
@@ -1457,7 +1438,7 @@
" def __call__(self, x):\n",
" sims = x @ self.weights.T\n",
" max_sim = softmax(sims * self.temp, axis=1)\n",
" return sspspace.SSP(max_sim @ self.weights)\n",
" return sspspace.SSP(max_sim @ self.weights) # the sspspace.SSP() wrapper is needed for the comparisons later on; it doesn't change the resulting vector\n",
"\n",
"\n",
"cleanup = Cleanup(vocab)\n",
