(This post is also shared here on LinkedIn.)
While the surge of interest in neural networks (more specifically, deep neural networks) and AI has greatly benefited various technological fields, these techniques are also expected to help advance scientific fields such as neuroscience.
The marriage between neuroscience and AI, even after decades of research and exploration, is still quite ambiguous and overwhelming. The buzzwords thrown around by the media tend to be attention-grabbing headlines designed to provoke emotional reactions. But if we step back for a minute and look at this space from a bird's-eye view, we get a glimpse of the current reality as well as the exciting future possibilities this union might offer, away from the current drama. To do that, we first need to categorize the different areas in this space.
The areas of collaboration between neural oscillations (in neuroscience), a.k.a. brainwaves, and neural networks (in AI/ML/DL) can be grouped into two overarching categories: theoretical applications and practical applications. In this edition, let us first look at the current and projected theoretical applications of neuroscience (more specifically my niche, neural oscillations/brainwaves) and AI.
Unlike industry, which primarily seeks applications of a practical nature, such as a product or a service to be marketed and sold, academia traditionally seeks applications of a theoretical nature. For example, in academia we are more interested in embarking on (research) projects that help discover, explore, and explain anything and everything. We have curious scientists in virtually every area imaginable, from researchers who dedicate their lives to studying the smallest entities in existence to those who study the largest.
At its core, research revolves around sets of data that scientists gather rigorously through particular research designs. During data analysis, we then look for patterns that can be interpreted against set hypotheses and the existing literature in the scientific body of knowledge. It would be safe to say that the key word here is patterns. To oversimplify, 'looking for patterns' can be thought of as one of the core elements that makes the case for the unification of neural networks and science.
While 'looking for patterns' is an oversimplification, as I mentioned above, it is the stepping stone to the next set of benefits that neural networks offer the theoretical side of the scientific world. The next step, and perhaps the most unique and mind-blowing benefit of neural networks, is their capability to autonomously adapt to, learn from, and explore datasets of choice. With the correct configuration and model training, neural networks can quite accurately automate the rigorous work of data synthesis and data analysis, helping scientists not only analyze data much more efficiently but also interpret it with increased confidence.
For example, in the case of neural oscillations (brainwaves) and neural networks (AI), many theoretical applications can benefit from the above-mentioned, oversimplified strength of neural networks, such as using deep learning techniques to analyze raw datasets of electrical brain activity gathered through EEG and/or LFP recordings. Here, the accuracy and efficiency that a specific deep learning technique, such as a CNN, brings to a scientist's meticulous work of exploring data and potentially making scientific discoveries is priceless. In these cases, the scientist and the neural network become colleagues of a sort, collaborating to expand our scientific knowledge and discoveries of the brain.
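To make the CNN idea above a little more concrete, here is a minimal sketch, using only NumPy and entirely synthetic data, of the core operation inside a convolutional layer: sliding a filter over a raw time series. In a trained CNN the filter weights are learned; here I hand-craft a 10 Hz (alpha-band) template purely to illustrate how such a filter lights up where a matching oscillation occurs. The sampling rate, burst timing, and kernel length are all my own illustrative assumptions, not values from any particular study.

```python
import numpy as np

fs = 250                     # assumed sampling rate in Hz (typical for EEG)
t = np.arange(0, 4, 1 / fs)  # 4 seconds of signal

# Synthetic "EEG": background noise everywhere, plus a 10 Hz alpha burst
# between 1 s and 2 s (all values are illustrative, not real recordings)
rng = np.random.default_rng(0)
signal = 0.5 * rng.standard_normal(t.size)
burst = (t >= 1) & (t < 2)
signal[burst] += np.sin(2 * np.pi * 10 * t[burst])

# One convolutional "kernel": a short 10 Hz sinusoid (0.2 s long).
# A real CNN would learn such kernels from data during training.
k_t = np.arange(0, 0.2, 1 / fs)
kernel = np.sin(2 * np.pi * 10 * k_t)

# Slide the kernel over the signal (correlation) and locate the strongest
# response; it should land inside the 1-2 s burst where the rhythm matches
response = np.convolve(signal, kernel[::-1], mode="same")
peak_time = t[np.argmax(np.abs(response))]
print(f"strongest filter response near t = {peak_time:.2f} s")
```

A deep learning framework stacks many such learned filters, adds nonlinearities, and trains them end to end, but the pattern-matching intuition is the same.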
Needless to say, a problem-free collaboration is quite unrealistic. Many components go into a satisfactory setup. Some of these prep-work steps are: a focused literature review, organized datasets, the right choice of neural network technique, the right setup and configuration of the network, and much more. But once the setup is right and a couple of satisfactory test runs have been completed, the researcher can safely keep the same setup and repeat the experiment over and over again, if that is the goal of the study.
As you can imagine, there are many technical skills a scientist would need in order to set up and run neural networks: understanding deep learning techniques such as CNNs, knowing Python, understanding how to organize and prepare datasets to feed into the networks, and much more. A scientist, as you may also imagine, traditionally may not have the computer science skills needed. Therefore, a new niche career has emerged: computational neuroscience, a.k.a. theoretical neuroscience.
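The "organizing datasets to feed into the networks" skill mentioned above often boils down to one concrete step: slicing a continuous multi-channel recording into fixed-length epochs around event markers, shaped as (trials, channels, samples), which is the layout most deep learning frameworks expect for convolution over time. A minimal sketch follows; the sampling rate, channel count, and event times are all hypothetical placeholders.

```python
import numpy as np

fs = 250            # assumed sampling rate in Hz
n_channels = 8      # hypothetical electrode count

# Stand-in for a continuous 60 s recording (channels x samples);
# a real pipeline would load this from an EEG/LFP file instead
recording = np.random.default_rng(1).standard_normal((n_channels, 60 * fs))

event_samples = [1000, 4000, 9000, 12000]  # hypothetical stimulus onsets
pre, post = int(0.2 * fs), int(0.8 * fs)   # 200 ms before, 800 ms after

# Cut a 1 s window around each event and stack into (trials, channels, samples)
epochs = np.stack([recording[:, s - pre : s + post] for s in event_samples])
print(epochs.shape)  # (4, 8, 250): 4 trials, 8 channels, 1 s of samples each
```

Libraries such as MNE-Python provide tested versions of this epoching step, but understanding the underlying array layout is exactly the kind of skill the paragraph above refers to.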
A computational neuroscientist (also see the Neuromatch Academy) uses mathematical models that capture neurological features to help conduct a plethora of scientific research in the field of neuroscience. Since most traditional neuroscientists may not have, or even be interested in, the technical skills needed to use deep learning in their research design and analysis, they tend to partner with computational neuroscientists on research projects. In these cases, the neuroscientists typically have institutional laboratories where they design research, run the studies, and collect raw data from their experiments, whereas the computational neuroscientists have access to the computational tools needed to code in Python, train models, and run deep learning pipelines on the datasets they are given.
Partnerships between laboratory neuroscientists and computational neuroscientists have been growing dramatically over the past decade. If goals align, this type of partnership can be extremely beneficial to both parties: 1) the laboratory neuroscientist provides solid lab-grown data to the computational neuroscientist, and 2) the computational neuroscientist provides the technical skills needed to set up and run deep learning techniques, such as CNNs. The win-win, if research goals align, is that each gives the other something the other needs but does not have.
Partnering early on helps both parties not only align on research goals but also agree on ground rules for the type of datasets needed to make the outcome as beneficial as possible. I say this because having a 'good' dataset is usually the number one complaint of a computational neuroscientist. Since a computational neuroscientist is fully dependent on the data and has no control over the datasets given to them, it is of the utmost importance to start collaborating early.
However, the reality is that most computational neuroscientists, especially students or those early in their academic careers, do not have the luxury of first-hand access to institutional laboratories. Thankfully, many institutions, such as accredited universities and private labs, publish their raw datasets, making them readily available to the public. However, using these datasets is sometimes a headache for computational neuroscientists, for the data-related reasons mentioned above.
Speaking of the raw datasets needed to feed neural networks, the first time I ever publicly talked about this was in 2012 at the Smart Data Conference in San Jose, California. At the time, I was roughly two years into my exploration and had noticed that most of the signals (in terms of data points) given to neural networks were oversimplified, neurologically and biologically, to make AI models tractable in neuroscience. Gradually, over the years, I watched the neuroscience community develop various ways to discover and test the best neuroscience data types available; some of the neural-oscillation (brainwave) related ones are documented in this book:
To sum up, one of the foremost theoretical applications of neuroscience in AI is using deep learning techniques, such as CNNs, in scientific research to help the neuroscience community uncover and expand our knowledge of the brain even further, and most likely faster than ever before. Over the past decades we have made tremendous progress in this field, but we are still in its infancy and have much more work to do.
If you are an aspiring student, there has never been a better time: today you can be anywhere in the world, with pretty much any background, and still get into the field of computational neuroscience. If you have not checked it out, visit Neuromatch Academy and their YouTube channel (I took their 2021 summer course and have no affiliation with them).
What do you think? Can you think of any additional theoretical applications of neuroscience in AI? Let me know and I will add to the list. Thanks!
Disclaimer: For over a decade, I have been researching and exploring ways of applying neuroscience (more specifically, neural oscillations) in AI/ML/DL, robotics, etc. All writings are my own, done the old-fashioned way; I do not use ChatGPT or the like to generate content. All views, research, and work are my own. All information provided in the Neuroscience & AI Newsletter is for general information purposes only and is the expressed opinion of myself, Nilo Sarraf, and not of others, including (but not limited to) my memberships, organizations, institutions, and/or employers.

