November 15, 2024

Brighton Journal

Unique perceptions of neural networks: machine decoding versus human sensory recognition

Summary: A new study delves into the mysterious world of deep neural networks, finding that while these models can recognize objects much as human sensory systems do, their recognition strategies differ from human perception. When the networks are asked to generate stimuli that they treat as equivalent to a given input, they often produce unrecognizable or distorted images and sounds.

This suggests that neural networks develop their own distinct “invariances,” which differ starkly from human perceptual patterns. The research provides a new way to evaluate models that aim to mimic human sensory perception.

Key facts:

  1. Deep neural networks, when asked to generate stimuli that they treat as equivalent to a given input, often produce images or sounds that bear no resemblance to the target.
  2. The models appear to develop their own idiosyncratic invariances, different from those of human perceptual systems, which lead them to perceive stimuli differently than humans do.
  3. Adversarial training can make the model-generated stimuli more recognizable to humans, even though they still do not match the original input.

Source: Massachusetts Institute of Technology

Human sensory systems are very good at recognizing objects we see or words we hear, even if the object is upside down or the word is spoken in a voice we’ve never heard before.

Computer models known as deep neural networks can be trained to do the same thing, correctly identifying an image of a dog regardless of the color of its fur, or identifying a word regardless of the tone of a speaker’s voice. However, a new study by MIT neuroscientists finds that these models often respond in the same way to images or words that bear no resemblance to the target.

When these neural networks were used to generate an image or a word that they responded to in the same way as a specific natural input, such as a picture of a bear, most of them generated images or sounds that human observers could not recognize. This suggests that these models build up their own “invariances,” meaning that they respond in the same way to stimuli with very different features.

The findings offer a new way for researchers to evaluate how well these models mimic the organization of human sensory perception, says Josh McDermott, associate professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research and MIT’s Center for Brains, Minds, and Machines.

“This paper shows that you can use these models to derive unnatural signals that end up being very diagnostic of the representations in the model,” says McDermott, the study’s senior author. “This test should become part of a suite of tests that we as a field use to evaluate models.”

Jenelle Feather Ph.D. ’22, now a research fellow at the Flatiron Institute’s Center for Computational Neuroscience, is the lead author of the open-access paper, which appears today in Nature Neuroscience. Guillaume Leclerc, a graduate student at MIT, and Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT, are also authors of the paper.

Different perceptions

In recent years, researchers have trained deep neural networks that can analyze millions of inputs (sounds or images) and learn common features that allow them to classify a target word or object with the same accuracy as humans do. These models are currently considered the leading models of biological sensory systems.

It is thought that when the human sensory system performs this type of categorization, it learns to ignore features that are not related to the basic identity of the object, such as the amount of light shining on it or the angle from which it is viewed. This is known as invariance, which means that objects are perceived as the same even if they show differences in those less important features.

“Classically, the way we have thought about sensory systems is that they build up invariances to all the sources of variation that different examples of the same thing can have,” Feather says. “An organism must perceive them as the same thing even though they show up as very different sensory signals.”
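To make the idea of invariance concrete, one can check whether a trained classifier assigns the same label to a stimulus and to a transformed version of it. The snippet below is a minimal illustration in PyTorch; the model, the transform, and the helper name are hypothetical and not drawn from the study.

```python
# Minimal illustration of an invariance check (hypothetical helper, not from
# the study): does a classifier assign the same label to a stimulus and to a
# transformed version of it (e.g., flipped or recolored)?
import torch

@torch.no_grad()
def is_invariant(model, image, transform):
    """Return True if the predicted label is unchanged by `transform`."""
    model.eval()
    original_label = model(image).argmax(dim=1)
    transformed_label = model(transform(image)).argmax(dim=1)
    return bool((original_label == transformed_label).all())
```

For example, `transform` could flip an image horizontally or shift its brightness; a model that is invariant to those changes would return True.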

The researchers wondered whether deep neural networks trained to perform classification tasks might evolve similar invariants. To try to answer this question, they used these models to generate stimuli that produced the same type of response within the model as an example stimulus that the researchers provided to the model.

They call these stimuli “model metamers,” reviving an idea from classical perception research in which stimuli that are indistinguishable to a system can be used to diagnose its invariances. The concept of metamers was originally developed in the study of human perception to describe colors that look identical even though they are made up of different wavelengths of light.
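In broad strokes, generating a model metamer amounts to optimizing an input until its activations at a chosen model stage match those evoked by a reference stimulus. The sketch below illustrates this idea in PyTorch under generic assumptions; the layer hook, step count, and learning rate are illustrative choices rather than the procedure published with the paper.

```python
# Illustrative sketch of metamer generation (not the authors' released code):
# optimize a random input so that its activations at a chosen layer match
# those evoked by a reference stimulus.
import torch
import torch.nn.functional as F

def generate_metamer(model, layer, reference, steps=2000, lr=0.01):
    """Return an input whose activations at `layer` match those of `reference`."""
    model.eval()
    activations = {}

    def hook(_module, _inputs, output):
        activations["value"] = output

    handle = layer.register_forward_hook(hook)

    # Record the target activations for the reference stimulus.
    with torch.no_grad():
        model(reference)
        target = activations["value"].detach()

    # Start from noise and optimize the input itself.
    metamer = torch.rand_like(reference, requires_grad=True)
    optimizer = torch.optim.Adam([metamer], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        model(metamer)
        loss = F.mse_loss(activations["value"], target)
        loss.backward()
        optimizer.step()
        metamer.data.clamp_(0, 1)  # keep pixel values in a valid range

    handle.remove()
    return metamer.detach()
```

The model stage at which activations are matched matters; the study found that metamers generated from late model stages were the ones that humans often could not recognize.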

To their surprise, the researchers found that most of the images and sounds produced in this way did not resemble the examples originally provided by the models. Most of the images were a jumble of random-looking pixels, and the sounds were like unintelligible noise. When the researchers showed the images to human observers, in most cases the humans did not categorize the images synthesized by the models into the same category as the original target example.

“They’re actually completely unrecognizable to humans. They don’t look or sound natural, and they don’t have interpretable features that anyone could use to classify an object or word,” Feather says.

The results suggest that the models have somehow developed their own invariances that differ from those found in human perceptual systems. As a result, the models perceive pairs of stimuli as the same even when they are wildly different to a human.

Idiosyncratic invariances

The researchers found the same effect across many different vision and hearing models. However, each model appears to develop its own unique invariances: when metamers generated by one model were shown to a second model, they were just as unrecognizable to the second model as they were to human observers.
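One way to picture this cross-model test is to ask how often a second model assigns a metamer the same label it assigns to the corresponding original stimulus. The snippet below is a schematic version of such a check with hypothetical inputs; it is not the evaluation code used in the study.

```python
# Schematic cross-model check (hypothetical, not the study's evaluation code):
# how often does a second model label metamers generated for a first model
# the same way it labels the original stimuli?
import torch

@torch.no_grad()
def metamer_transfer_rate(second_model, originals, metamers):
    """Fraction of metamers classified like their originals by `second_model`."""
    second_model.eval()
    original_labels = second_model(originals).argmax(dim=1)
    metamer_labels = second_model(metamers).argmax(dim=1)
    return (original_labels == metamer_labels).float().mean().item()
```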

“The main takeaway from this is that these models seem to have what we call idiosyncratic invariances,” McDermott says. “They have learned to be invariant to particular dimensions of the stimulus space that are specific to each model, so other models do not have the same invariances.”

The researchers also found that they could induce the models to produce metamers that are more recognizable to humans by using an approach called adversarial training. This approach was originally developed to address another limitation of object recognition models: introducing small, almost imperceptible changes to an image can cause the model to misclassify it.

The researchers found that adversarial training, which involves including some of these slightly altered images in the training data, produced models whose metamers were more recognizable to humans, although still not as recognizable as the original stimuli. The researchers say this improvement appears to be independent of the training’s effect on the models’ ability to resist adversarial attacks.
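Adversarial training, in its common form, augments each training batch with inputs that have been slightly perturbed to fool the current model. The sketch below shows one widely used variant (FGSM-style perturbations) for orientation; it is an assumption about the general technique, not the specific training recipe used in the study.

```python
# Minimal sketch of adversarial training with FGSM-style perturbations
# (a common variant; not necessarily the procedure used in the study).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on adversarially perturbed copies of a batch."""
    model.train()

    # Craft perturbations that increase the loss of the current model.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # Train on the perturbed images (many recipes mix clean and perturbed data).
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(adv_images), labels)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```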

“This type of training has a big effect, but we don’t really know why there’s that effect,” Feather says. “This is an area for future research.”

Analyzing the metamers produced by computational models could be a useful tool for evaluating how closely a computational model mimics the underlying organization of human perceptual systems, the researchers say.

“This is a behavioral test that you can run on a given model to see whether the invariances are shared between the model and human observers,” Feather says. “It could also be used to evaluate how idiosyncratic the invariances are within a given model, which could help reveal potential ways to improve our models in the future.”

Funding: The National Science Foundation, the National Institutes of Health, a Department of Energy Computational Science Graduate Fellowship, and a Friends of the McGovern Institute Fellowship funded the research.

About Artificial Intelligence and Cognition Research News

Author: Sarah McDonnell
Source: Massachusetts Institute of Technology
Contact: Sarah McDonnell – Massachusetts Institute of Technology
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Model metamers reveal divergent invariances between biological and artificial neural networks” by Josh McDermott et al. Nature Neuroscience


Abstract

Model metamers reveal divergent invariances between biological and artificial neural networks

Deep neural network models of sensory systems are often proposed to learn representational transformations with invariances like those in the brain. To reveal these invariances, we generated “model metamers,” stimuli whose activations within a model stage are matched to those of a natural stimulus.

Metamers for state-of-the-art supervised and unsupervised neural network models of vision and audition were often completely unrecognizable to humans when generated from late model stages, suggesting differences between model and human invariances. Targeted model changes improved the human recognizability of model metamers but did not eliminate the overall human-model discrepancy.

The human recognizability of a model’s metamers was well predicted by their recognizability by other models, suggesting that models contain idiosyncratic invariances in addition to those required by the task.

Metamer recognizability was dissociated from both traditional brain-based benchmarks and adversarial vulnerability, revealing a distinct failure mode of existing sensory models and providing a complementary benchmark for model evaluation.