MIT scientists have taken a step towards the future of robot control, natural language processing, and video processing by developing a new type of neural network that learns on the job, not just during its training phase. This new neural network, according to scientists, could help in making better decisions in autonomous driving and medical diagnosis.
The system is built from flexible algorithms, hence the name "liquid networks": the algorithms continuously change their underlying equations to adapt to new data inputs.
Ramin Hasani, the study's lead author, said, "Time series data are both ubiquitous and vital to our understanding of the world. The real world is all about sequences. Even our perception — you do not perceive images, you perceive sequences of images. So, time-series data create our reality."
“Let’s consider video processing, financial data, and medical diagnostic applications as examples of time series that are central to society. The vicissitudes of these ever-changing data streams can be unpredictable. Yet analyzing these data in real-time, and using them to anticipate future behavior, can boost emerging technologies like self-driving cars. So, we built an algorithm fit for the task.”
This "liquid" machine-learning system adapts to the variability of real-world systems.
Hasani said, “The inspiration came from the microscopic nematode, C. elegans. It only has 302 neurons in its nervous system, yet it can generate unexpectedly complex dynamics.”
The neural network was designed with close attention to how C. elegans neurons activate and communicate via electrical impulses. In the equations Hasani used to structure his neural network, he allowed the parameters to change over time based on a nested set of differential equations.
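To make this concrete, here is a minimal sketch of a liquid time-constant cell in the spirit of the paper's core equation, dx/dt = -(1/τ + f(x, I)) · x + f(x, I) · A, where the same learned nonlinearity f both scales the decay (making the effective time constant input-dependent) and drives the state. The dimensions, weight initialization, and the simple Euler solver are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, not taken from the paper's experiments)
n_hidden, n_in = 8, 3

# Trainable parameters: a small network f, plus per-neuron tau and A
W_x = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_i = rng.normal(scale=0.1, size=(n_hidden, n_in))
b = np.zeros(n_hidden)
tau = np.ones(n_hidden)   # base time constants
A = np.ones(n_hidden)     # bias states the neurons are driven toward

def f(x, u):
    """Shared nonlinearity: modulates both decay rate and input drive."""
    return np.tanh(W_x @ x + W_i @ u + b)

def ltc_step(x, u, dt=0.05):
    """One explicit-Euler step of dx/dt = -(1/tau + f)*x + f*A.
    Because f depends on the input u, each neuron's effective time
    constant changes with the data stream, even after training."""
    fx = f(x, u)
    return x + dt * (-(1.0 / tau + fx) * x + fx * A)

# Run the cell over a short synthetic input sequence
x = np.zeros(n_hidden)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 1.0])
    x = ltc_step(x, u)
```

Note that because the coefficient of x in the ODE is itself a function of the input, the state evolves with an input-dependent time constant — the "liquid" behavior the article describes.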
Hasani says, “This flexibility is key. Most neural networks’ behavior is fixed after the training phase, which means they’re bad at adjusting to changes in the incoming data stream. the fluidity of his “liquid” network makes it more resilient to unexpected or noisy data, like if heavy rain obscures the view of a camera on a self-driving car. So, it’s more robust.”
“There’s another advantage of the network’s flexibility. It’s more interpretable.”
“The liquid network skirts the inscrutability common to other neural networks. By changing the representation of a neuron, you can explore some degrees of complexity you couldn’t explore otherwise.”
"Thanks to the small number of highly expressive neurons, it's easier to peer into the 'black box' of the network's decision making and diagnose why the network made a certain characterization."
“The model itself is richer in terms of expressivity. That could help engineers understand and improve the liquid network’s performance.”
The new neural network excelled in a battery of tests, edging out other state-of-the-art time series algorithms by a few percentage points at accurately predicting future values in datasets ranging from atmospheric chemistry to traffic patterns.
Hasani plans to keep improving the system and ready it for industrial application.
This research was funded, in part, by Boeing, the National Science Foundation, the Austrian Science Fund, and Electronic Components and Systems for European Leadership.
- Ramin Hasani et al., "Liquid Time-constant Networks," arXiv:2006.04439v4