sliced inverse regression post
urbanophile committed Apr 18, 2024
1 parent 42377c0 commit 8198743
Showing 2 changed files with 33 additions and 0 deletions.
32 changes: 32 additions & 0 deletions content/Blog/sliced_inverse_regression.md
@@ -0,0 +1,32 @@
Title: Sliced Inverse Regression
Author: Matt Gibson
Date: 2024-04-18

Sometimes, you read something unexpected. I was looking through [this paper on statistical perspectives on representation learning](https://arxiv.org/pdf/1911.11374.pdf) and came across this family of methods: "sliced inverse regression." Sliced inverse regression? What the heck? I think I have a reasonable general knowledge of statistical learning methods, so it's exciting to come across something new. [update: do not try to find a paper whose only keywords you remember are deep learning, representations, and statistics.]

Sliced inverse regression is a kind of supervised dimensionality reduction developed by statisticians. Suppose you have a supervised regression problem with high-dimensional inputs `X^m` and a target `y`. The idea is to reduce `X^m` to `X^n`, with `n < m`, while preserving as much information about `y` as possible. This seems sensible, right? You'd usually use PCA for this or some sort of [stupid unsupervised dimensionality reduction](https://scikit-learn.org/stable/modules/random_projection.html). However, that is wasteful because you don't care about fully preserving the information in `X^m`; you only need the information relevant to `y`.

There are other things you can do here, e.g. [principal components regression](https://en.wikipedia.org/wiki/Principal_component_regression); you can even do the same thing with kernel PCA + kernel regression, which the machine learning literature calls spectral regression. [According to Wikipedia](https://en.wikipedia.org/wiki/Principal_component_regression#Further_reading), people have been doing principal components regression in the econometrics literature since at least the 1970s. Still, if you've tried to read the econometrics literature, you will understand why it had to be rediscovered.

Here's the [OG paper](https://www.tandfonline.com/doi/abs/10.1080/01621459.1991.10475035) (one thing I do like about some of these JASA papers is that they have invited commentaries). There's a Python package, [`sliced`](https://joshloyal.github.io/sliced/notebooks/quickstart.html). There is not a heap going on, but the author has implemented two variations on the algorithm:
* Sliced inverse regression (SIR)
* Sliced Average Variance Estimation (SAVE)
Also [there's a tutorial](https://joshloyal.github.io/sliced/auto_examples/plot_athletes.html#sphx-glr-auto-examples-plot-athletes-py) which might give you a better sense of what's going on.
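
In case it helps, here's roughly what using the package looks like. It follows the scikit-learn fit/transform convention, but the class and parameter names below (`SlicedInverseRegression`, `SlicedAverageVarianceEstimation`, `n_directions`) are reproduced from memory of the quickstart, so treat this as a sketch and check the docs before copying it; the toy response is made up.

```
# Hedged usage sketch of the `sliced` package (scikit-learn style API);
# parameter names are assumptions taken from the quickstart.
import numpy as np
from sliced import SlicedInverseRegression, SlicedAverageVarianceEstimation

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                                  # p = 10 predictors
y = X[:, 0] + 0.5 * X[:, 1] ** 3 + 0.1 * rng.normal(size=500)   # made-up response

sir = SlicedInverseRegression(n_directions=2)   # ask for a 2-dimensional subspace
sir.fit(X, y)
X_sir = sir.transform(X)                        # X projected onto the SIR directions

save = SlicedAverageVarianceEstimation(n_directions=2)
save.fit(X, y)
X_save = save.transform(X)
```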

The SIR regression model assumes the following setup:

```
y = f(b1*x, b2*x, ..., bk*x, e)
```

Here, the b's are unknown row vectors, e is independent of x, and f is an arbitrary unknown function on `R^(K+1)`. If the assumptions hold, we capture all the information we need about y by projecting the p-dimensional explanatory variable x onto the K-dimensional subspace `(b1*x, ..., bk*x)`. We can therefore reduce the dimension of the data by estimating the b's efficiently when K is small. Any linear combination of the b's is called a reduced direction, and, for convenience, the linear space B generated by the b's is called the reduced space.
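
To make the setup concrete, here's a made-up example with p = 10 and K = 2 (the link function is similar to one of the examples in Li's paper, but any f would do); the point is that y depends on x only through the two projections `b1*x` and `b2*x`.

```
# Toy data from the model y = f(b1*x, b2*x, e) with p = 10, K = 2.
# The link function f is arbitrary and never shown to the algorithm.
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 10
b1 = np.zeros(p); b1[0] = 1.0     # first reduced direction (picks out x1)
b2 = np.zeros(p); b2[1] = 1.0     # second reduced direction (picks out x2)

X = rng.normal(size=(n, p))
e = 0.1 * rng.normal(size=n)
y = (X @ b1) / (0.5 + (X @ b2 + 1.5) ** 2) + e
```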

The sliced inverse regression (SIR) algorithm estimates the reduced directions using inverse regression. We start by standardizing the input variables (x) and getting a piece-wise estimate of the inverse regression curve E(x | y); since y is scalar, this is a one-dimensional regression problem. This is done using the slice means of x: the data is partitioned into several slices based on the value of y (literally, roughly equal-sized slices), and the mean of x within each slice is computed. Then, we apply PCA to these slice means of x, use some distributional assumptions to decide how many components to discard, and ta-da: we have an approximation to the K-dimensional subspace that tracks the inverse regression curve E(x | y). Finally, we get the SIR output by transforming the identified components back to the original scale with an affine retransformation.
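
Here's a bare-bones sketch of those steps against the toy data above, just to pin down what "slice means plus PCA" means in practice. This is my own illustration, not the `sliced` package's implementation.

```
# Minimal SIR sketch: estimate the reduced directions from (X, y).
import numpy as np

def sir_directions(X, y, n_slices=10, n_directions=2):
    n, p = X.shape

    # 1. Standardize x: z = (x - mean) @ Sigma^{-1/2}.
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mean) @ inv_sqrt

    # 2. Partition the observations into roughly equal-sized slices by sorting on y.
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)

    # 3. Slice means of z, weighted by slice size, form the SIR kernel matrix.
    M = np.zeros((p, p))
    for idx in slices:
        m_h = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m_h, m_h)

    # 4. Top eigenvectors of M estimate the reduced subspace in the z scale.
    w, V = np.linalg.eigh(M)
    top = V[:, np.argsort(w)[::-1][:n_directions]]

    # 5. Map back to the original x scale.
    return inv_sqrt @ top   # columns are estimated reduced directions

B_hat = sir_directions(X, y)   # X, y from the toy example above
```

Deciding how many slices to use and how many directions to keep is where the distributional assumptions (and the significance test in Li's paper) come in; the sketch just hard-codes both.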

In the Sliced Average Variance Estimation (SAVE) method, we again divide the data into roughly equal-sized slices, but the kernel matrix is built from the within-slice covariances of the standardized data z, and we use the eigenvectors corresponding to its larger eigenvalues to estimate the reduced subspace. The matrix is:
```
SAVE = ∑_h p_h (I - var(z | y ∈ I_h))^2
```
Here, z is the standardized version of x, I is the identity matrix, I_h is the h-th slice of the range of y, and p_h is the fraction of observations falling in that slice. Cook and Weisberg make a few other important points regarding SAVE and its comparison with SIR. SIR is helpful in picking up on dependence of y on x through `E(y | x)`, `var(y | x)`, or other moments. However, SIR has problems with symmetry, which SAVE does not have. Correspondingly, SAVE has problems with linearity, whereas SIR does not. Both methods yield practical graphical diagnostic information. Go read the [Cook and Weisberg 1991](https://www.jstor.org/stable/2290564) companion piece, which has an excellent discussion. Also, there's an R package, [`dr`](https://cran.r-project.org/web/packages/dr/), which probably has more up-to-date references to the literature.
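
Continuing the same hand-rolled sketch, SAVE swaps the slice means for within-slice covariances; again, this is just my illustration of the formula above, not the package's code.

```
# Minimal SAVE sketch: same standardization and slicing as SIR,
# but the kernel matrix uses within-slice covariances instead of slice means.
import numpy as np

def save_directions(X, y, n_slices=10, n_directions=2):
    n, p = X.shape
    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mean) @ inv_sqrt

    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        D = np.eye(p) - np.cov(Z[idx], rowvar=False)   # I - var(z | slice)
        M += (len(idx) / n) * D @ D                    # squared, weighted by slice size

    w, V = np.linalg.eigh(M)
    top = V[:, np.argsort(w)[::-1][:n_directions]]
    return inv_sqrt @ top
```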

An addendum: I then thought lightning was striking twice because I came across the [`Reverse Cuthill-McKee algorithm`](https://en.wikipedia.org/wiki/Cuthill%E2%80%93McKee_algorithm). However, this algorithm is much less attractive because it uses a breadth-first search on the graph of the sparse matrix to [approximately solve the (NP-complete) bandwidth reduction problem](http://ciprian-zavoianu.blogspot.com/2009/01/project-bandwidth-reduction.html).
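
If you want to poke at it, SciPy ships this one as `scipy.sparse.csgraph.reverse_cuthill_mckee`; it returns a permutation that you apply to both rows and columns to shrink the bandwidth. The toy matrix below is made up.

```
# Reordering a sparse symmetric matrix with reverse Cuthill-McKee via SciPy.
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

A = sp.random(50, 50, density=0.05, random_state=0, format="csr")
A = (A + A.T).tocsr()                          # symmetrize so the graph is undirected
perm = reverse_cuthill_mckee(A, symmetric_mode=True)
A_reordered = A[perm, :][:, perm]              # should have much smaller bandwidth
```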
1 change: 1 addition & 0 deletions content/Blog/updating_website_thoughts.rst
@@ -4,6 +4,7 @@ Updating my website
:date: 2024-01-15
:authors: Matt Gibson


It's been a little while since I've updated my website.

Good:
