Pavlovian associative learning is a basic form of learning that shapes the behavior of humans and animals. By contrast, training "conventional" artificial neural networks (ANNs) with backpropagation, especially modern deep networks, is computationally and energy intensive.
New research combining Pavlovian learning with optical parallel processing demonstrates exciting potential for a range of AI tasks.
Scientists from Oxford University's Department of Materials and the Universities of Exeter and Münster have developed an on-chip optical processor that can detect similarities in datasets up to 1,000 times faster than conventional machine learning algorithms running on electronic processors.
The Associative Monadic Learning Element (AMLE) uses a memory material that learns patterns in order to associate similar features in datasets. It mimics the conditioned reflex Pavlov observed, responding when inputs "match", rather than using the backpropagation favored by neural networks to "fine-tune" results.
To supervise the learning process, the AMLE inputs are paired with the appropriate outputs, and the memory material can be reset using light signals. After training with just five pairs of images, the AMLE was tested and found to distinguish between cat and non-cat images.
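The training scheme described above can be sketched in software. The class below is a toy analogue only, not the AMLE's actual mechanism: it pairs each input pattern with a label during training, recalls a label when a test input "matches" a stored pattern closely enough, and can be reset (the optical chip resets its memory material with light). The similarity measure and threshold are illustrative assumptions.

```python
import math


class AssociativeLearner:
    """Toy software analogue of match-based associative learning.

    Unlike backpropagation, training stores each input pattern
    alongside its paired output; recall fires on a similarity
    "match". Names and thresholds here are illustrative only.
    """

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.patterns = []  # stored (pattern, label) associations

    @staticmethod
    def _normalize(pattern):
        norm = math.sqrt(sum(x * x for x in pattern))
        return [x / norm for x in pattern]

    def train(self, pattern, label):
        # One-shot association: pair the input with its output.
        self.patterns.append((self._normalize(pattern), label))

    def recall(self, pattern):
        # Return the label of the best-matching stored pattern,
        # provided the similarity exceeds the match threshold.
        q = self._normalize(pattern)
        best_label, best_sim = None, 0.0
        for p, label in self.patterns:
            sim = sum(a * b for a, b in zip(p, q))  # cosine similarity
            if sim > best_sim:
                best_label, best_sim = label, sim
        return best_label if best_sim >= self.threshold else None

    def reset(self):
        # Analogue of resetting the memory material with light.
        self.patterns.clear()
```

A learner trained on a handful of labeled patterns can then classify a noisy variant of one of them by similarity alone, with no gradient-based fine-tuning.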
The considerable performance advantage of the new optical chip over a conventional electronic chip stems from two key design differences:
- A unique network architecture that uses associative learning as its building block rather than neurons and a neural network.
- The use of 'wavelength-division multiplexing' to send multiple optical signals on different wavelengths along a single channel, increasing computational speed.
The chip technology transmits and receives data with light to maximize information density. Multiple signals at different wavelengths are fed in simultaneously for parallel processing, shortening detection times for recognition tasks; computing speed scales with the number of wavelengths used.
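The parallelism this describes can be illustrated with a small sketch. Each "wavelength" below carries an independent signal down the same channel and all are compared against a template concurrently; the wavelength values and the dot-product task are invented for illustration, and in the optical chip the parallelism is physical rather than thread-based.

```python
from concurrent.futures import ThreadPoolExecutor


def correlate(signal, template):
    """Dot-product similarity for one wavelength's signal."""
    return sum(a * b for a, b in zip(signal, template))


def wdm_process(channels, template):
    """Process every wavelength channel concurrently.

    `channels` maps a nominal wavelength (in nm, hypothetical
    values) to the signal it carries. Because each wavelength is
    handled in parallel, throughput grows with the number of
    wavelengths multiplexed onto the single channel.
    """
    with ThreadPoolExecutor(max_workers=len(channels)) as pool:
        futures = {wl: pool.submit(correlate, sig, template)
                   for wl, sig in channels.items()}
        return {wl: f.result() for wl, f in futures.items()}
```

For example, three signals at 1550, 1551, and 1552 nm can be correlated against one template in a single parallel pass rather than three sequential ones.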
Professor Wolfram Pernice, co-author from Münster University, explained: “The device naturally captures similarities in datasets while doing so in parallel using light to increase the overall computation speed – which can far exceed the capabilities of conventional electronic chips.”
Co-first author Professor Zengguang Cheng, now at Fudan University, said, “It is more efficient for problems that don’t need substantial analysis of highly complex features in the datasets. Many learning tasks are volume based and don’t have that level of complexity – in these cases, associative learning can complete the tasks more quickly and at a lower computational cost.”
Professor Harish Bhaskaran, who led the study, said, “It is increasingly evident that AI will be at the centre of many innovations we will witness in the coming phase of human history. This work paves the way towards realizing fast optical processors that capture data associations for particular types of AI computations, although there are still many exciting challenges ahead.”