Knowing when a deep-learning neural network should not be trusted

MIT researchers have developed a way for deep-learning neural networks to rapidly estimate confidence levels in their output. The advance could improve safety and efficiency in AI-assisted decision-making. Credit: MIT

A faster way to estimate uncertainty in AI-assisted decision-making could lead to safer outcomes.

Increasingly, artificial intelligence systems known as deep-learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they are correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.

They have developed a quick way for a neural network to crunch data and output not just a prediction, but also the model's confidence level based on the quality of the available data. The advance could save lives, as deep learning is already being deployed in the real world. A network's level of certainty can be the difference between an autonomous vehicle determining that "it's all clear to proceed through the intersection" and "it's probably clear, so stop just in case."

Current methods of estimating uncertainty for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini's approach, dubbed "deep evidential regression," accelerates the process and could lead to safer outcomes. "We need the ability to not only have high-performance models, but also to understand when we cannot trust those models," says Amini, a PhD student in Professor Daniela Rus' group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

"This idea is important and applicable broadly. It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and whether missing data could improve the model," says Rus.

Amini will present the research at next month's NeurIPS conference, along with Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, director of CSAIL, and deputy dean of research for the MIT Stephen A. Schwarzman College of Computing; and graduate students Wilko Schwarting of MIT and Ava Soleimany of MIT and Harvard.

Efficient uncertainty

After an up-and-down history, deep learning has demonstrated remarkable performance on a variety of tasks, in some cases even surpassing human accuracy. And nowadays, deep learning seems to go wherever computers go. It fuels search engine results, social media feeds, and facial recognition. "We've had huge successes using deep learning," says Amini. "Neural networks are really good at knowing the right answer 99 percent of the time." But 99 percent won't cut it when lives are on the line.

"One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong," says Amini. "We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently."

Neural networks can be massive, sometimes brimming with billions of parameters. So it can be a heavy computational lift just to get an answer, let alone a confidence level. Uncertainty analysis in neural networks isn't new. But previous approaches, stemming from Bayesian deep learning, have often relied on running, or sampling, a neural network many times over to understand its confidence. That process takes time and memory, a luxury that might not exist in high-speed traffic.
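The cost of those sampling-based approaches can be illustrated with a minimal sketch. Here `stochastic_forward` is a hypothetical toy stand-in for a real network evaluated with dropout left on; the point is that the uncertainty estimate requires many full forward passes, so its cost scales with the number of samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_forward(x):
    """Hypothetical stand-in for one forward pass of a network with
    dropout enabled: a linear map whose weight jitters each call."""
    w = 2.0 + rng.normal(scale=0.1)  # dropout-like randomness in the weights
    return w * x

def mc_uncertainty(x, n_samples=100):
    """Sampling-based estimate: run the network many times and take
    the spread of the predictions as the uncertainty. Each sample is
    a full forward pass, which is what makes this approach slow."""
    preds = np.array([stochastic_forward(x) for _ in range(n_samples)])
    return preds.mean(), preds.std()

mean, std = mc_uncertainty(3.0)
```

With 100 samples, the answer costs 100 times as much as a single prediction, which is exactly the overhead the new method avoids.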

The researchers devised a way to estimate uncertainty from only a single run of the neural network. They designed the network with a bulked-up output, producing not only a decision but also a new probabilistic distribution capturing the evidence in support of that decision. These distributions, termed evidential distributions, directly capture the model's confidence in its prediction. This includes any uncertainty present in the underlying input data, as well as in the model's final decision. This distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.
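Concretely, in the deep evidential regression paper the network's extra outputs are the four parameters of a Normal-Inverse-Gamma distribution, from which the prediction and both kinds of uncertainty fall out in closed form, with no sampling. A minimal sketch of that final step (the parameter values here are made up for illustration):

```python
def evidential_uncertainty(gamma, v, alpha, beta):
    """Given the four Normal-Inverse-Gamma parameters (gamma, v, alpha, beta)
    that an evidential regression head outputs for one target, recover:
      - the prediction itself,            E[mu]     = gamma
      - aleatoric (data) uncertainty,     E[sigma^2] = beta / (alpha - 1)
      - epistemic (model) uncertainty,    Var[mu]   = beta / (v * (alpha - 1))
    (requires alpha > 1)."""
    prediction = gamma
    aleatoric = beta / (alpha - 1.0)        # noise inherent in the input data
    epistemic = beta / (v * (alpha - 1.0))  # the model's own lack of confidence
    return prediction, aleatoric, epistemic

# Hypothetical head outputs for a single pixel's depth estimate:
pred, alea, epi = evidential_uncertainty(gamma=4.2, v=2.0, alpha=3.0, beta=1.0)
# aleatoric = 1/(3-1) = 0.5 ; epistemic = 0.5/2 = 0.25
```

Because these are simple algebraic expressions over one network output, the confidence level costs essentially nothing beyond the single forward pass.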

Confidence check

To put their approach to the test, the researchers started with a challenging computer vision task. They trained their neural network to analyze a monocular color image and estimate a depth value (i.e., the distance from the camera lens) for each pixel. An autonomous vehicle might use similar calculations to estimate its proximity to a pedestrian or to another vehicle, which is no easy task.

Their network's performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty. As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth. "It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator," says Amini.

To stress-test their calibration, the team also showed that the network projected higher uncertainty for "out-of-distribution" data, completely new types of images never encountered during training. After they trained the network on indoor home scenes, they fed it a batch of outdoor driving scenes. The network consistently warned that its responses to the novel outdoor scenes were uncertain. The test highlighted the network's ability to flag when users should not place full trust in its decisions. In these cases, "if this is a health care application, maybe we shouldn't trust the diagnosis that the model is giving, and instead seek a second opinion," says Amini.
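In an application, this "second opinion" behavior amounts to thresholding the model's uncertainty. A minimal sketch, assuming we already have per-prediction epistemic uncertainties and an application-chosen threshold (both hypothetical here):

```python
import numpy as np

def flag_unreliable(predictions, epistemic, threshold):
    """Keep predictions whose model (epistemic) uncertainty falls below
    a chosen threshold; return the indices of the rest so they can be
    routed to a fallback, e.g. a human reviewer or a second model."""
    predictions = np.asarray(predictions)
    epistemic = np.asarray(epistemic)
    trusted = epistemic < threshold
    return predictions[trusted], np.flatnonzero(~trusted)

# Three hypothetical depth estimates; the second is highly uncertain.
preds, flagged = flag_unreliable(
    [2.1, 5.0, 3.3], [0.05, 0.9, 0.1], threshold=0.5
)
# preds -> [2.1, 3.3], flagged -> [1]
```

The threshold itself is a design choice: a medical application would set it far more conservatively than, say, a photo-tagging feature.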

The network also knew when photos had been doctored, potentially hedging against data-manipulation attacks. In another trial, the researchers boosted adversarial noise levels in a batch of images they fed to the network. The effect was subtle, barely perceptible to the human eye, but the network sniffed out those images, tagging its output with high levels of uncertainty. This ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of deepfakes.

Deep evidential regression is "a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems," says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved with the work. "This is done in a novel way that avoids some of the messy aspects of other approaches (e.g., sampling or ensembles), which makes it not only elegant but also computationally more efficient, a winning combination."

Deep evidential regression could enhance safety in AI-assisted decision-making. "We're starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences," says Amini. "Any user of the method, whether it's a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision." He envisions the system not only quickly flagging uncertainty, but also using it to make more conservative decisions in risky scenarios, like an autonomous vehicle approaching an intersection.

"Any field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness," he says.

This work was supported in part by the National Science Foundation and the Toyota Research Institute through the Toyota-CSAIL Joint Research Center.
