Compute mutual information for an Ensemble's tuning curves #112

Open
tcstewar opened this issue Feb 3, 2023 · 0 comments
tcstewar commented Feb 3, 2023

Right now we choose the distribution of tuning curves somewhat arbitrarily by assuming a uniform distribution of intercepts. In higher dimensions, we've found improvements by instead making the proportion of the space that each neuron is active for uniformly distributed (i.e. the CosineSimilarity(D+2) intercept distribution).
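
For concreteness, here's a minimal sketch of those two intercept choices using the existing `nengo.dists` API (the neuron count and dimensionality are placeholders):

```python
import nengo

D = 16
with nengo.Network() as model:
    # Default: intercepts drawn from Uniform(-1, 1)
    ens_uniform = nengo.Ensemble(100, dimensions=D)

    # Uniform over the proportion of the space each neuron is active for
    ens_cos = nengo.Ensemble(
        100,
        dimensions=D,
        intercepts=nengo.dists.CosineSimilarity(D + 2),
    )
```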

However, there's also the approach of looking at it from an efficient coding point of view, and searching for tuning curves that optimize some sort of mutual information measure. For example, in https://www.biorxiv.org/content/10.1101/2022.11.03.515104v1.full.pdf they use Fisher information. In that particular paper, they make certain assumptions about the neuron model (sigmoid) and the distribution of neuron thresholds (log-normal) to make the optimization analytically tractable. Rather than going that far, we could just provide a tool that measures the Fisher information (or other information measures) for a given set of tuning curves and a distribution of represented values, both of which are already part of a nengo Ensemble. This would give a nice way of exploring different tuning curve distribution options.
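
As a starting point, here's a minimal sketch (not a proposed final API) that estimates Fisher information for a 1-D Ensemble from its tuning curves, assuming a Poisson noise model where J(x) = Σᵢ fᵢ'(x)² / fᵢ(x); the noise model and the grid resolution are assumptions, not something from the paper:

```python
import numpy as np
import nengo
from nengo.utils.ensemble import tuning_curves

with nengo.Network(seed=0) as model:
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)

with nengo.Simulator(model) as sim:
    # Evaluate the rate-mode tuning curves on a dense grid of represented values
    x = np.linspace(-1, 1, 501).reshape(-1, 1)
    _, rates = tuning_curves(ens, sim, inputs=x)

# Numerical derivative of each neuron's tuning curve along x
drates = np.gradient(rates, x.ravel(), axis=0)

# Poisson-noise Fisher information at each x; the epsilon guards against
# division by zero where neurons are silent
J = np.sum(drates**2 / (rates + 1e-9), axis=1)
```

Integrating `J` against the Ensemble's distribution of represented values (e.g. `np.trapz(J * p_x, x.ravel())` for some density `p_x` on the grid) would then give a single scalar for comparing intercept distributions.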
