Right now we choose the distribution of tuning curves somewhat arbitrarily by assuming a uniform distribution of intercepts. In higher dimensions, we've found improvements by having a uniform distribution over the proportion of the space that each neuron is active for (i.e. the CosineSimilarity(D+2) distribution).
However, there's also the approach of looking at this from an efficient coding point of view, searching for tuning curves that optimize some mutual information measure. For example, https://www.biorxiv.org/content/10.1101/2022.11.03.515104v1.full.pdf uses Fisher information. That particular paper makes certain assumptions about the neuron model (sigmoid) and the distribution of neuron thresholds (log-normal) in order to make the optimization analytically tractable. But instead of going that far, we could just have a tool for measuring the Fisher information (or other information measures) given a set of tuning curves and a distribution of represented values, both of which are already part of a nengo Ensemble. This could give a nice way of exploring different tuning curve distribution options.
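To make the idea concrete, here's a minimal numpy sketch of what such a measurement tool might compute. It assumes independent Poisson spiking noise (for which the population Fisher information is J(x) = Σᵢ aᵢ′(x)² / aᵢ(x)) and uses synthetic rectified-linear tuning curves with uniformly distributed intercepts as a stand-in for a real Ensemble; in practice, `nengo.utils.ensemble.tuning_curves` would supply the `eval_points` and `activities` arrays. The noise model, neuron model, and parameter ranges here are illustrative choices, not part of the proposal.

```python
import numpy as np

def fisher_information(eval_points, activities):
    """Numerically estimate Fisher information J(x) for a 1-D population.

    eval_points : (n_points,) sorted represented values x
    activities  : (n_points, n_neurons) tuning curves a_i(x)

    Assumes independent Poisson spiking, for which
    J(x) = sum_i a_i'(x)^2 / a_i(x).
    """
    # Numerical derivative of each tuning curve along x
    d_act = np.gradient(activities, eval_points, axis=0)
    # Each neuron contributes a'(x)^2 / a(x); silent neurons contribute 0
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(activities > 1e-12, d_act**2 / activities, 0.0)
    return terms.sum(axis=1)

# Illustrative stand-in for an Ensemble's tuning curves:
# rectified-linear neurons with uniformly distributed intercepts.
rng = np.random.default_rng(0)
n_neurons, n_points = 100, 201
x = np.linspace(-1, 1, n_points)
intercepts = rng.uniform(-1, 1, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)
gains = rng.uniform(50, 150, n_neurons)
activities = np.maximum(0.0, gains * (encoders * x[:, None] - intercepts))

J = fisher_information(x, activities)  # one value per represented x
```

Plotting `J` against `x` (or summarizing it with an average weighted by the distribution of represented values) would then let us compare intercept distributions such as `Uniform` versus `CosineSimilarity(D+2)` on an equal footing.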