A new type of neural network could aid decision-making in autonomous driving and medical diagnosis.
MIT researchers have developed a type of neural network that learns on the job, not just during its training phase. These flexible algorithms, dubbed “liquid” networks, continually change their underlying equations to adapt to new data inputs. The advance could aid decision-making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.
“This is a way forward for the future of robot control, natural language processing, video processing, or any other form of time series data processing,” says study author Ramin Hasani. “The potential is really significant.”
The research will be presented at the AAAI Conference on Artificial Intelligence in February. In addition to Hasani, a postdoc at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the MIT co-authors include Daniela Rus, CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and PhD student Alexander Amini. Other co-authors include Mathias Lechner of the Institute of Science and Technology Austria and Radu Grosu of the Vienna University of Technology.
Time series data is ubiquitous and vital to our understanding of the world, according to Hasani. “The real world is all about sequences. Our perception works the same way: you are not perceiving single images, you are perceiving sequences of images,” he says. “So time series data actually creates our reality.”
He points to video processing, financial data, and medical diagnostic applications as examples of time series that are central to society. The twists and turns of these ever-changing data streams can be unpredictable. Yet analyzing these data in real time, and using them to anticipate future behavior, could boost the development of emerging technologies like self-driving cars. So Hasani built an algorithm fit for the task.
Hasani designed a neural network that can adapt to the variability of real-world systems. Neural networks are algorithms that recognize patterns by analyzing a set of “training” examples. They are often said to mimic the processing pathways of the brain; Hasani drew inspiration directly from the microscopic nematode C. elegans. “It only has 302 neurons in its nervous system,” he says, “yet it can generate unexpectedly complex dynamics.”
Hasani coded his neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses. In the equations he used to structure his network, he allowed the parameters to change over time based on the results of a nested set of differential equations.
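To make that idea concrete, here is a rough NumPy sketch of one Euler integration step of a liquid time-constant style cell, following the general form of the dynamics dx/dt = −[1/τ + f(x, I)]·x + f(x, I)·A described in the “Liquid Time-constant Networks” paper. This is an illustrative simplification, not the authors’ implementation; the names (ltc_step, W, U, b, tau, A) are made up for this example.

```python
import numpy as np

def ltc_step(x, I, W, U, b, tau, A, dt=0.01):
    """One explicit-Euler step of a simplified liquid time-constant cell.

    The nonlinearity f depends on both the state x and the input I, and it
    gates the effective time constant (1/tau + f) as well as the drive
    toward the bias vector A. Because f changes with the input stream,
    the cell's governing equation changes too -- the "liquid" behavior.
    """
    f = np.tanh(W @ x + U @ I + b)           # state- and input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A      # dx/dt = -[1/tau + f] x + f A
    return x + dt * dxdt

# Tiny demo: 4 hidden units driven by a 2-dimensional input stream.
rng = np.random.default_rng(0)
n, m = 4, 2
W = rng.normal(scale=0.5, size=(n, n))       # recurrent weights (illustrative)
U = rng.normal(scale=0.5, size=(n, m))       # input weights (illustrative)
b = np.zeros(n)
tau = np.ones(n)                             # base time constants
A = np.ones(n)                               # equilibrium/bias vector
x = np.zeros(n)                              # hidden state

for t in range(100):
    I = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])  # time-varying input
    x = ltc_step(x, I, W, U, b, tau, A)

print(x.shape)  # (4,)
```

In a real system these parameters would be trained; the sketch only shows how the input stream reshapes each neuron’s effective time constant at every step, rather than the dynamics being frozen after training.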
This flexibility is key. Most neural networks’ behavior is fixed after the training phase, which means they are bad at adjusting to changes in the incoming data stream. Hasani says the fluidity of his “liquid” network makes it more resilient to unexpected or noisy data, like when heavy rain obscures the view of a camera on a self-driving car. “So it’s more robust,” he says.
This flexibility has another advantage, he says: “It’s more interpretable.”
Hasani says his liquid network skirts the inscrutability common to other neural networks. “Just changing the representation of a neuron,” which Hasani did with the differential equations, “allows you to explore some degrees of complexity you couldn’t explore otherwise.” Thanks to Hasani’s small number of highly expressive neurons, it is easier to peer into the “black box” of the network’s decision-making and diagnose why it made a certain characterization.
“The model itself is richer in terms of expressiveness,” says Hasani. This can help engineers understand and improve the performance of the liquid network.
Hasani’s network excelled in a battery of tests. It edged out other state-of-the-art time series algorithms in accurately predicting future values in datasets ranging from atmospheric chemistry to traffic patterns. “In many applications, we see the performance is reliably high,” he says. Plus, the network’s small size meant it completed the tests without a steep computing cost. “Everyone talks about scaling up their network,” says Hasani. “We want to scale down, to have fewer but richer nodes.”
Hasani plans to keep improving the system and ready it for industrial application. “We have more expressive neural networks that are inspired by nature. But this is just the beginning of the process,” he says. “The obvious question is: how do you extend this? We think this kind of network could be a key element of future intelligence systems.”
Reference: “Liquid Time-constant Networks” by Ramin Hasani, Mathias Lechner, Alexander Amini, Daniela Rus and Radu Grosu, 14 December 2020, Computer Science > Machine Learning.
This research was funded in part by Boeing, the National Science Foundation, the Austrian Science Fund, and Electronic Components and Systems for European Leadership.