Sound can be used in high-resolution imaging, study shows

Deep learning and metamaterials make the invisible visible.


Seeing an object much smaller than the wavelength of the light used to observe it is difficult for an observer in the far field, because of the diffraction limit. Recent advances in near- and far-field microscopy have offered several ways to overcome this constraint.

However, these techniques often rely on invasive markers and complex equipment, along with sophisticated image post-processing.

In a new study, EPFL scientists have combined metamaterials – specially engineered elements – with artificial intelligence. They have shown that a long, and therefore imprecise, wave (in this case, a sound wave) can reveal details 30 times smaller than its wavelength. In doing so, they have demonstrated that sound can be used in high-resolution imaging.

Originally, the scientists wanted to bring together two technologies that have each pushed the boundaries of imaging. One is metamaterials: purpose-built elements that can, for example, focus wavelengths precisely, but which are known to lose their effectiveness by haphazardly absorbing signals in a way that makes them difficult to decipher. The other is artificial intelligence, or more specifically, neural networks that can process even the most complex information quickly and efficiently, provided they are trained first.

To beat the diffraction limit, the scientists conducted an experiment in which they built a lattice of 64 miniature speakers, each of which could be activated according to the pixels in an image. Using this lattice, they reproduced sound images of the numerals 0 to 9 with precise spatial detail; the digit images fed into the lattice were drawn from a database of some 70,000 handwritten examples.
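As a rough illustration of that first step, the sketch below maps a digit image onto drive levels for an 8 x 8 speaker grid. This is a minimal sketch in Python: the 28 x 28 input size (typical of handwritten-digit datasets of this scale), the block-averaging scheme, and the helper name `image_to_speaker_amplitudes` are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def image_to_speaker_amplitudes(digit_image: np.ndarray) -> np.ndarray:
    """Map a 28x28 grayscale digit to drive levels for an 8x8 speaker grid.

    Hypothetical helper: the study drives each speaker according to image
    pixels; the exact downsampling and drive scheme here are assumptions.
    """
    # Downsample 28x28 -> 8x8 by averaging 3x3 blocks of the central 24x24 region
    crop = digit_image[2:26, 2:26].astype(float)
    blocks = crop.reshape(8, 3, 8, 3).mean(axis=(1, 3))
    # Normalize to [0, 1] so each value is a relative speaker amplitude
    amplitudes = blocks / max(blocks.max(), 1e-9)
    return amplitudes.ravel()  # 64 values, one per speaker

# Example: an image containing a single bright bar activates one band of speakers
img = np.zeros((28, 28))
img[12:15, 4:24] = 255.0
print(image_to_speaker_amplitudes(img).reshape(8, 8).round(2))
```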

Opposite the lattice, the researchers placed a bag of 39 Helmholtz resonators (10-cm spheres with a hole at one end) that formed a metamaterial. The sound produced by the lattice was transmitted through the metamaterial and captured by four microphones placed several meters away. Algorithms then decoded the sound recorded by the microphones, learning to recognize and redraw the original digit images.
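The decoding step can be pictured as a learned inverse map from microphone recordings back to images. The Python sketch below shows one generic way such a decoder could be set up with PyTorch; the input layout (256 spectral bins per microphone), the layer sizes, and the training details are assumptions for illustration and do not reproduce the architecture used in the study.

```python
import torch
import torch.nn as nn

# Assumed input layout: features extracted from the 4 microphone
# recordings (e.g. 256 spectral bins per mic), flattened into one vector.
N_MICS, N_FREQ = 4, 256
IMG_SIDE = 28  # output digit-image resolution (assumed)

# Generic fully connected decoder: mic features -> 28x28 pixel intensities
decoder = nn.Sequential(
    nn.Linear(N_MICS * N_FREQ, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, IMG_SIDE * IMG_SIDE),
    nn.Sigmoid(),  # pixel intensities in [0, 1]
)

optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step against the known input digits (stand-in random data)
mic_features = torch.randn(32, N_MICS * N_FREQ)
target_digits = torch.rand(32, IMG_SIDE * IMG_SIDE)

pred = decoder(mic_features)
loss = loss_fn(pred, target_digits)
loss.backward()
optimizer.step()
print(f"reconstruction loss: {loss.item():.4f}")
```

In practice the network would be trained on many recorded sound/image pairs until it learns to invert the scrambling introduced by the metamaterial.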

Romain Fleury said, “The team achieved a nearly 90% success rate with their experiment. By generating images with a resolution of only a few centimeters – using a sound wave whose length was approximately a meter – we moved well past the diffraction limit.”
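The quoted numbers can be sanity-checked with a quick calculation. The values below (1 m wavelength, 3 cm resolution, sound speed in air) are assumed from the quote rather than taken from the paper:

```python
# Rough sanity check of the quoted figures (assumed values, not from the paper)
speed_of_sound = 343.0   # m/s in air at ~20 C
wavelength = 1.0         # m, "approximately a meter"
resolution = 0.03        # m, "a few centimeters"

frequency = speed_of_sound / wavelength
print(f"frequency  ~ {frequency:.0f} Hz")
print(f"resolution ~ wavelength / {wavelength / resolution:.0f}")
# -> details roughly 30x smaller than the wavelength, far below the
#    conventional diffraction limit of about half a wavelength
```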

“Also, the tendency of metamaterials to absorb signals, which had been considered a major disadvantage, turns out to be an advantage when neural networks are involved. We found that they work better when there is a great deal of absorption.”

“In the field of medical imaging, using long waves to see tiny objects could be a breakthrough. Long waves mean that doctors can use much lower frequencies, resulting in acoustic imaging methods that are effective even through dense bone tissue. When it comes to imaging that uses electromagnetic waves, long waves are less hazardous to a patient’s health. For these types of applications, we wouldn’t train neural networks to recognize or reproduce numerals, but rather organic structures.”

Journal Reference:
  1. Bakhtiyar Orazbayev and Romain Fleury. Far-field subwavelength acoustic imaging by deep learning. Physical Review X 10, 031029 (7 August 2020). DOI: 10.1103/PhysRevX.10.031029
