Learning the architectural features that predict functional similarity of neural networks

Abstract

The mapping of the wiring diagrams of neural circuits promises to let us link the structure and function of neural networks. Current approaches to analyzing connectomes rely mainly on graph-theoretical tools, but these may downplay the complex nonlinear dynamics of single neurons and networks, and the way networks respond to their inputs. Here, we measure the functional similarity of simulated networks of neurons by quantifying how similar their spiking patterns are in response to the same stimuli. We find that common graph theory metrics convey little information about the similarity of networks' responses. Instead, we learn a functional metric between networks based on their synaptic differences, and show that it accurately predicts the similarity of novel networks for a wide range of stimuli. We then show that a sparse set of architectural features (the sum of synaptic inputs that each neuron receives and the sum of each neuron's synaptic outputs) predicts the functional similarity of networks of up to 100 cells with high accuracy. We thus suggest new architectural design principles that shape the function of neural networks, consistent with experimental evidence of homeostatic mechanisms.
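A minimal sketch of the architectural features the abstract describes, the per-neuron summed synaptic inputs and outputs, assuming each network is represented as a synaptic weight matrix `W` with `W[i, j]` the synapse from neuron `j` to neuron `i`. This convention, and the plain Euclidean distance used here, are illustrative assumptions: the paper learns a functional metric over synaptic differences rather than using an unweighted one.

```python
import numpy as np

def architectural_features(W):
    """Concatenate per-neuron summed synaptic inputs and outputs.

    Assumes W[i, j] is the synaptic weight from neuron j to neuron i
    (an illustrative convention, not specified in the abstract).
    """
    total_inputs = W.sum(axis=1)   # total synaptic input each neuron receives
    total_outputs = W.sum(axis=0)  # total synaptic output each neuron sends
    return np.concatenate([total_inputs, total_outputs])

def architectural_distance(W1, W2):
    """Euclidean distance between feature vectors; the paper instead
    learns a weighted metric predictive of spiking-pattern similarity."""
    return np.linalg.norm(architectural_features(W1) - architectural_features(W2))

# Two 5-neuron networks differing by a single synapse
rng = np.random.default_rng(0)
W1 = rng.random((5, 5))
W2 = W1.copy()
W2[2, 3] += 0.5  # strengthen one synapse
print(architectural_distance(W1, W2))
```

Because a single changed synapse shifts both one neuron's summed input and another's summed output, the feature vectors differ in exactly two entries, and the distance reflects that sparsity.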

Publication
Cold Spring Harbor Laboratory