A machine learning approach to encode images like a retina

An actor-model framework.


Neuroengineering faces a significant challenge: how to deliver artificial input to the senses so that it produces the intended perception. The problem is central to neuroprosthetics, which aims to restore sensory perception in people with disabilities. Visual prosthetics, for instance, must reduce the images captured by a camera to match the limited number and quality of inputs the prosthesis can deliver. This process, known as artificial sensory encoding, is critical to making prosthetic devices work well.

Demetri Psaltis from the Optics Lab, Christophe Moser from the Laboratory of Applied Photonics Devices, and Diego Ghezzi from the Hôpital ophtalmique Jules-Gonin – Fondation Asile des Aveugles joined forces to tackle the challenge of compressing image data for retinal prostheses using machine learning.

Traditionally, downsampling for retinal implants relies on simple pixel averaging, a fixed mathematical operation with no learning involved. The team discovered, however, that a learning-based approach improved the results, especially when using an unconstrained neural network that mimicked aspects of retinal processing.
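To make the baseline concrete, here is a minimal sketch of pixel-averaging downsampling in Python. The function name and the frame/implant sizes are illustrative, not taken from the paper:

```python
import numpy as np

def pixel_average_downsample(image: np.ndarray, factor: int) -> np.ndarray:
    """Shrink a 2-D image by averaging non-overlapping factor x factor blocks."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor  # drop edge rows/cols that don't fill a block
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# e.g. reduce a 256x256 camera frame to a 32x32 implant input (sizes illustrative)
frame = np.random.rand(256, 256)
print(pixel_average_downsample(frame, factor=8).shape)  # (32, 32)
```

Every output pixel is just the mean of its block, so fine structure and contrast are blurred away uniformly; this is exactly what a learned downsampler has room to improve on.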

Their machine learning approach, an actor-model framework, excelled at finding the right balance of image contrast, much like adjusting contrast in Photoshop. The framework pairs two neural networks. The model acts as a digital replica of the retina: it is trained to take a high-resolution image and generate a neural code similar to that of an actual retina. The actor network is then trained to downsample images so that the low-resolution output elicits a neural code as close as possible to the actual retina's response to the original image.
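The paper does not publish its training loop here, but a minimal PyTorch sketch of the idea might look like the following. The architectures, layer sizes, image and code dimensions, and the MSE objective are all assumptions made for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes: 256x256 camera frames, a 32x32 implant, and a neural
# code of 512 ganglion-cell responses. None of these come from the paper.
HIGH, LOW, N_CELLS = 256, 32, 512

class RetinaModel(nn.Module):
    """The 'model': a digital twin of the retina, mapping an image to a
    neural code. In the real pipeline it is trained first, on recorded
    retinal responses, and then frozen."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(32 * 8 * 8, N_CELLS),
        )

    def forward(self, x):
        return self.net(x)

class Actor(nn.Module):
    """The 'actor': learns a downsampling from camera resolution to
    implant resolution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
            nn.AdaptiveAvgPool2d(LOW),  # shrink to the implant's pixel grid
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = RetinaModel()   # assume weights were pretrained on retinal recordings
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # the digital twin stays fixed while the actor learns

actor = Actor()
opt = torch.optim.Adam(actor.parameters(), lr=1e-4)

loader = [torch.rand(8, 1, HIGH, HIGH) for _ in range(10)]  # stand-in dataset

for images in loader:
    target_code = model(images)                   # retina's response to the original
    low_res = actor(images)                       # learned downsampled stimulus
    stimulus = F.interpolate(low_res, size=HIGH)  # present low-res image full-field
    loss = F.mse_loss(model(stimulus), target_code)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design point is that the frozen digital twin supplies the training signal: gradients flow through it into the actor, so the actor is optimized for the retina's response rather than for pixel-level fidelity.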

Testing the method on a digital retina model and on actual mouse retina samples showed that the actor-model approach produced downsampled images that triggered a neuronal response closer to that of the original image than traditional methods like pixel averaging.
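One plausible way to score such a comparison is to correlate the neural code elicited by each downsampled image with the code elicited by the original; the paper's exact metric may differ, and the response vectors below are random placeholders:

```python
import numpy as np

def code_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two neural codes (vectors of cell responses)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Placeholder responses: in the study these would be recorded or simulated
# ganglion-cell responses to the original image and to each downsampled version.
rng = np.random.default_rng(0)
original = rng.random(512)
via_averaging = original + rng.normal(0.0, 0.5, 512)  # noisier stand-in
via_actor = original + rng.normal(0.0, 0.2, 512)      # closer stand-in

print(code_similarity(original, via_averaging))
print(code_similarity(original, via_actor))
```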

While using mouse retinas for testing posed challenges, Ghezzi emphasized the importance of validating their approach with actual biological samples.

The team sees potential for applying their framework beyond vision restoration, possibly even extending it to other sensory prostheses like those for hearing or limbs. They aim to explore broader image compression techniques and adapt their model for various brain regions, keeping an eye on future advancements in sensory restoration.

Journal Reference:

  1. Leong, F., Rahmani, B., Psaltis, D., et al. An actor-model framework for visual sensory encoding. Nature Communications 15, 808 (2024). https://doi.org/10.1038/s41467-024-45105-5
