Researchers from The University of Texas at San Antonio (UTSA), the University of Central Florida (UCF), the Air Force Research Laboratory (AFRL) and SRI International have developed a new method that improves how artificial intelligence learns to see.
Led by Sumit Jha, professor in the Department of Computer Science at UTSA, the team has changed the conventional approach to explaining machine learning decisions, which relies on a single injection of noise into the input layer of a neural network. Jha’s new research is described in the paper ‘On Smoother Attributions using Neural Stochastic Differential Equations.’
The team shows that adding noise, also known as pixelation, along multiple layers of a network provides a more robust representation of an image recognized by the AI and creates more robust explanations for the AI’s decisions. This work aids in the development of what’s been called ‘explainable AI,’ which seeks to enable high-assurance applications of AI.
“It’s about injecting noise into every layer,” Jha said. “The network is now forced to learn a more robust representation of the input in all of its internal layers. If every layer experiences more perturbations in every training, then the image representation will be more robust and you won’t see the AI fail just because you change a few pixels of the input image.”
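To illustrate the idea Jha describes, here is a minimal sketch in Python/numpy of a forward pass that perturbs every layer rather than only the input. The layer sizes, the tanh activation and the noise scale sigma are illustrative assumptions, not the team's actual architecture or code:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_forward(x, weights, sigma=0.1):
    """Forward pass that injects Gaussian noise into every layer,
    not just the input -- a sketch of the idea, not the paper's code."""
    h = x
    for W in weights:
        h = np.tanh(W @ h)                            # ordinary layer computation
        h = h + sigma * rng.standard_normal(h.shape)  # perturb this layer's output
    return h

# Tiny demo: three random layers acting on a flattened "image".
weights = [rng.standard_normal((16, 16)) for _ in range(3)]
x = rng.standard_normal(16)
print(noisy_forward(x, weights))
```

Because each internal representation is perturbed during training, the network cannot rely on any single fragile feature, which is the robustness effect described above.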
Computer vision, the ability to recognize images, has many business applications. This type of machine learning is employed across many industries; manufacturers, for example, use it to detect defect rates.
Through deep learning, a computer is trained to perform behaviors, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through set equations, deep learning works within basic parameters about a data set and trains the computer to learn on its own by recognizing patterns using many layers of processing.
In most models that rely on neural ordinary differential equations (ODEs), a machine is trained with one input through one network, and the signal then spreads through the hidden layers to create one response in the output layer. The team of UTSA, UCF, AFRL and SRI researchers instead uses a more dynamic approach known as stochastic differential equations (SDEs). Exploiting the connection between dynamical systems and neural networks, they show that neural SDEs lead to less noisy, visually sharper and quantitatively robust attributions than those computed using neural ODEs.
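To make the ODE/SDE distinction concrete, the sketch below contrasts a deterministic Euler step (the neural-ODE picture) with an Euler–Maruyama step that adds Brownian noise at every step (the neural-SDE picture). The drift function f here is a simple stand-in for a learned network, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(h, t):
    # Illustrative drift term: stands in for a learned network layer.
    return np.tanh(h) - 0.1 * h

def ode_trajectory(h0, dt=0.01, steps=100):
    """Neural-ODE-style evolution: deterministic Euler integration."""
    h = h0.copy()
    for k in range(steps):
        h += dt * f(h, k * dt)
    return h

def sde_trajectory(h0, sigma=0.1, dt=0.01, steps=100):
    """Neural-SDE-style evolution: Euler-Maruyama adds Brownian noise
    at every step, i.e. perturbations throughout the dynamics."""
    h = h0.copy()
    for k in range(steps):
        h += dt * f(h, k * dt) + sigma * np.sqrt(dt) * rng.standard_normal(h.shape)
    return h

h0 = rng.standard_normal(8)
print(ode_trajectory(h0))
print(sde_trajectory(h0))
```

In the SDE case, each solver step plays the role of a noisy layer, so the hidden state is effectively trained on a cloud of nearby inputs rather than a single trajectory.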
The SDE approach learns not just from one image but from a set of nearby images, thanks to the injection of noise into multiple layers of the neural network. As more noise is injected, the machine learns evolving representations and finds better ways to produce explanations, or attributions, because the model is built on the evolving characteristics and conditions of the image. It is an improvement on several other attribution approaches, including saliency maps and integrated gradients.
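For context on those baselines: a saliency map uses the raw gradient of the model's score with respect to the input pixels, while integrated gradients averages gradients along a straight path from a baseline (often a black image) to the input. A minimal numpy sketch of integrated gradients follows; the toy scoring function and the finite-difference gradient are assumptions for illustration (real implementations use automatic differentiation):

```python
import numpy as np

def grad(f, x, eps=1e-5):
    """Finite-difference gradient of scalar f at x (autodiff in practice)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def integrated_gradients(f, x, baseline=None, steps=50):
    """Average the gradient along the straight path from the baseline
    to the input, then scale by the input-baseline difference."""
    if baseline is None:
        baseline = np.zeros_like(x)   # black-image baseline
    total = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        total += grad(f, point)
    return (x - baseline) * total / steps

# Toy "model": a smooth scalar score over a flattened 4-pixel image.
f = lambda x: np.tanh(x).sum()
x = np.array([0.2, -0.5, 1.0, 0.1])
print(integrated_gradients(f, x))
```

The team's contribution, per the paper title, is that computing such attributions through a neural SDE rather than a neural ODE makes the resulting pixel-level explanations smoother and more robust.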
“I am delighted to share the fantastic news that our paper on explainable AI has just been accepted at IJCAI,” Jha added. “This is a big opportunity for UTSA to be part of the global conversation on how a machine sees.”
For more information: www.utsa.edu