Cosmological thinking meets neuroscience in a new theory of brain connections

Summary: A new mathematical model that identifies essential connections between neurons reveals that some connections in the brain's neural networks may be more essential than others.

Source: HHMI

After a career spent probing the mysteries of the universe, a senior scientist at the Janelia Research Campus is now exploring the mysteries of the human brain and developing new insights into the connections between brain cells.

Tirthabir Biswas had a successful career as a theoretical high-energy physicist when he came to Janelia for a sabbatical in 2018. Biswas still enjoyed tackling the problems of the universe, but he had lost some of his enthusiasm for a field in which many major questions had already been answered.

“Neuroscience today is a lot like physics a hundred years ago, when physics had so much data and they didn’t know what was going on and that was exciting,” says Biswas, who is part of the Fitzgerald Lab.

“There’s a lot of information in neuroscience and a lot of data, including some specific large circuits, but there’s still no overall theoretical understanding, and there’s an opportunity to make a contribution.”

One of the biggest unanswered questions in neuroscience concerns the connections between brain cells. There are hundreds of times more connections in the human brain than there are stars in the Milky Way, but which brain cells are connected, and why, remains a mystery. This gap limits scientists’ ability to treat mental health conditions and to develop more accurate artificial intelligence.

The challenge of developing a mathematical theory to better understand these connections was a problem Janelia group leader James Fitzgerald first posed when Biswas arrived in his lab.

While Fitzgerald was out of town for a few days, Biswas sat down with pen and paper and used his background in high-dimensional geometry to think about the problem – a different approach from that of neuroscientists, who usually rely on calculus and algebra to solve mathematical problems. Within days, Biswas had a major insight into the solution and approached Fitzgerald as soon as he returned.

“It seemed like it was a very difficult problem, so if I say, ‘I solved the problem,’ he’ll probably think I’m crazy,” Biswas recalls. “But I decided to say it anyway.” Fitzgerald was initially skeptical, but once Biswas finished presenting his work, they both realized he was onto something big.

“He had a really basic idea of how these networks work that people hadn’t had before,” Fitzgerald says. “This idea was made possible by interdisciplinary thinking. It was a stroke of genius that came from his way of thinking, applied to a new problem he had never worked on before.”

Biswas’ idea helped the team develop a new way to identify critical connections between brain cells, which was published June 29 in Physical Review Research. By analyzing neural networks – mathematical models that mimic brain cells and their connections – they found that certain connections in the brain may be more essential than others.

Specifically, they looked at how these networks transform inputs into outputs. For example, an input could be a signal detected by the eye, and the output the resulting brain activity. They then examined which connection patterns give rise to the same input-output transformation.
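
To make that input-output picture concrete, below is a minimal sketch, in Python, of the kind of model the team analyzed: a recurrent linear threshold network driven to a steady state. The dimensions, weights, and convergence scheme are illustrative assumptions, not the authors’ code.

```python
import numpy as np

def steady_state(W, F, x, n_steps=500):
    """Iterate a recurrent linear threshold (ReLU) network to a steady state.

    W: (n, n) recurrent weights between the n model neurons
    F: (n, m) feedforward weights from the m inputs onto the neurons
    x: (m,)  input signal, e.g. a stimulus pattern detected by the eye
    """
    r = np.zeros(W.shape[0])
    for _ in range(n_steps):
        r = np.maximum(0.0, W @ r + F @ x)   # nonlinear (thresholded) summation
    return r                                 # the network's steady-state output

rng = np.random.default_rng(0)
n, m = 5, 3
W = 0.1 * rng.standard_normal((n, n))        # weak recurrence so the iteration settles
F = rng.standard_normal((n, m))
x = rng.standard_normal(m)
print(steady_state(W, F, x))                 # output activity evoked by input x
```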

As expected, there were infinitely many possible connectivity patterns for each input-output combination. But the team also found that certain connections appeared in every model, leading them to suggest that these necessary connections may be present in real brains. A better understanding of which connections are more essential than others could shed light on how real neural networks in the brain perform computations.
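
This degeneracy is easy to see in a stripped-down linear setting (an illustrative simplification; the study treats full recurrent threshold networks): any change to the weights that is invisible to every specified input leaves the input-output behavior untouched, so infinitely many connectivity matrices compute the same function.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
n, m, k = 4, 6, 2                  # neurons, inputs, specified input-output pairs
X = rng.standard_normal((m, k))    # the k input patterns (one per column)
W = rng.standard_normal((n, m))    # one connectivity matrix, treated as "ground truth"
Y = W @ X                          # the responses it produces

# Any perturbation D with D @ X == 0 gives a different network with the
# same input-output behavior, since (W + D) @ X == Y.
N = null_space(X.T)                # weight directions invisible to all k inputs
D = rng.standard_normal((n, N.shape[1])) @ N.T
W2 = W + D

print(np.allclose(W2 @ X, Y))      # True: identical input-output function
print(np.allclose(W2, W))          # False: different connections
```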

The next step is for experimental neuroscientists to test this new mathematical theory to see if it can be used to make predictions about what happens in the brain. The theory has direct applications to Janelia’s efforts to map the fly brain connectome and record brain activity in zebrafish larvae. Understanding the underlying theoretical principles in these small animals can be used to understand the connections in the human brain, where recording such activity is not yet possible.

“What we’re trying to do is come up with theoretical ways to figure out what really matters and use these simple brains to test those theories,” Fitzgerald says. “Because they are verified in simple brains, the general theory can be used to think about how brain computation works in larger brains.”

About this neuroscience research news

Author: Nanci Bompey
Source: HHMI
Contact: Nanci Bompey – HHMI
Image: Image is in public domain

Original research: Closed access.
“Geometric framework for predicting structure from function in neural networks” by Tirthabir Biswas et al. Physical Review Research


Summary

Geometric framework for predicting structure from function in neural networks

Neural computation in biological and artificial networks relies on the nonlinear summation of many inputs.

The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but the quantitative links between neural network structure and function are complex and subtle. For example, many networks may give rise to similar functional responses, and the same network may function differently depending on the context.

Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown.

Here, we introduce a geometric framework to identify the synaptic connections required by steady-state responses in recurrent linear threshold neural networks.

Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically compute the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs.

A noise-aware generalization further reveals that the geometry of the solution space can undergo topological transitions as the allowed error increases, which could provide insight into both neuroscience and machine learning.

We finally use this geometric characterization to derive certainty conditions that guarantee a nonzero synapse between neurons.

Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the architecture of the model.
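
As a rough numerical counterpart to these certainty conditions (an assumed toy setup, not the authors’ analytic test), one can sample many weight vectors that all reproduce the same responses and flag the synapses that stay nonzero, with a fixed sign, in every sample:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)
m, k = 6, 4                                   # input synapses, response patterns
X = rng.standard_normal((m, k))               # input patterns (k <= m, as assumed)
X[:, 0] = 0.0
X[0, 0] = 1.0                                 # first pattern drives only synapse 0,
                                              # pinning its weight in every solution
y = rng.standard_normal(k)                    # one neuron's specified responses

w0, *_ = np.linalg.lstsq(X.T, y, rcond=None)  # one weight vector with w @ X == y
N = null_space(X.T)                           # free directions of the solution space
W = w0 + 5.0 * rng.standard_normal((1000, N.shape[1])) @ N.T
assert np.allclose(W @ X, y)                  # every sample realizes the responses

forced = (W > 0).all(axis=0) | (W < 0).all(axis=0)
print(np.flatnonzero(forced))                 # expected: [0], the pinned synapse
```

Here the first response pattern isolates synapse 0, so its weight is fixed across the entire solution space; the paper characterizes such unavoidable synapses analytically rather than by sampling.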
