How does artificial intelligence make decisions?
Published: November 7, 2020
It is a question many of us first encounter in childhood: "Why did you do that?" As artificial intelligence (AI) begins to make more and more important decisions that affect our lives, we also hope these machines can answer that simple yet profound question. After all, why should we trust an AI's decisions?
This desire for a satisfactory explanation has prompted scientists at the National Institute of Standards and Technology (NIST) to propose a set of principles by which we can judge how AI explains its decisions. Their draft, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), aims to stimulate discussion about what we should expect from AI decision-making systems.
According to the authors, the four basic principles are as follows (a minimal code sketch of what they might look like in practice appears after the list):
1. An AI system should provide accompanying evidence or reasons for all of its outputs.
2. The system should provide explanations that are meaningful and understandable to its users.
3. The explanation should correctly reflect the AI system's process for generating the output.
4. The system should operate only under the conditions it was designed for, or when it reaches sufficient confidence in its output. (The idea is that if the system does not have enough confidence in its decision, it should not deliver that decision to the user.)
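To make the principles more concrete, here is a minimal, hypothetical sketch of a decision service that attaches evidence to its outputs and abstains when it is not confident. The model choice, confidence threshold, and feature names are illustrative assumptions of this sketch, not part of the NIST draft.

```python
# Hypothetical sketch inspired by the four draft principles (not from NISTIR 8312).
from dataclasses import dataclass
from typing import Optional

import numpy as np
from sklearn.linear_model import LogisticRegression

@dataclass
class Decision:
    label: Optional[int]   # None means the system declines to decide
    confidence: float
    evidence: dict         # per-feature contributions offered as "reasons"

def explainable_decision(model: LogisticRegression,
                         x: np.ndarray,
                         feature_names: list,
                         min_confidence: float = 0.9) -> Decision:
    proba = model.predict_proba(x.reshape(1, -1))[0]
    confidence = float(proba.max())

    # Principle 4 (knowledge limits): abstain when confidence is too low.
    if confidence < min_confidence:
        return Decision(label=None, confidence=confidence, evidence={})

    # Principles 1-3: attach evidence that reflects how the score was computed.
    # For a linear model, weight * feature value is a faithful contribution.
    contributions = model.coef_[0] * x
    evidence = dict(zip(feature_names, contributions.round(3).tolist()))
    return Decision(label=int(proba.argmax()), confidence=confidence, evidence=evidence)

# Illustrative usage on purely synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)
print(explainable_decision(model, X[0], ["income", "debt", "tenure"]))
```

For a linear model, weight-times-value contributions do reflect how the score was actually computed (principle 3); for more complex models a separate attribution method would be needed, and whether the resulting explanation is meaningful to a particular user (principle 2) still has to be judged on its own.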
The report is part of NIST's broader effort to help develop trustworthy AI systems. NIST's foundational research aims to build trust in these systems by understanding their theoretical capabilities and limitations and by improving their accuracy, reliability, security, robustness, and explainability; that last quality is the focus of this report.
The draft's authors are seeking feedback from the public. Because the subject is very broad, ranging from mechanical engineering and computer science to psychology and legal research, they hope it will prompt a wide-ranging discussion.
"AI is increasingly involved in high-risk decision-making, and no one wants machines to make decisions without knowing why," said Jonathan Phillips, an electronics engineer at NIST, one of the authors of the draft. But an explanation to the satisfaction of engineers may not apply to people with other professional backgrounds. Therefore, we hope to improve the draft from different perspectives and perspectives. "
Understanding the reasons behind an AI system's decisions can benefit everyone the decision touches. If AI contributes to loan-approval decisions, for example, that understanding can help software designers improve the system. Applicants, in turn, will want insight into the AI's reasoning, whether to learn why the bank rejected them or to help maintain a good credit rating after a loan is approved.
Phillips said that while these principles sound simple, individual users often apply different criteria when judging whether an AI has met their needs. The "explanation" called for in the second principle, for instance, means different things to different people, depending on their role and their relationship to the AI's work.
Referring to the Star Trek characters, Phillips said: "Think about how Kirk and Spock talk to each other. A doctor who uses an AI system to help diagnose disease may only need a Spock-style explanation of why the machine recommends a particular treatment. The patient may not need the technical details, but will want a Kirk-style explanation of how the decision relates to his or her life."
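As a hypothetical illustration of that point (not taken from the NIST draft), the sketch below renders the same loan-style decision evidence in two registers: a technical summary for a developer and a plain-language reason list for the applicant. The feature names and wording are invented for the example.

```python
# Hypothetical sketch: rendering one decision's evidence for two audiences.
# Feature names, values, and wording are illustrative only.

def explain_for_developer(evidence: dict) -> str:
    """Technical view: every contribution, sorted by magnitude."""
    ranked = sorted(evidence.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return "; ".join(f"{name}: {value:+.3f}" for name, value in ranked)

def explain_for_applicant(evidence: dict, top_k: int = 2) -> str:
    """Plain-language view: only the few factors that weighed most against approval."""
    negatives = sorted((kv for kv in evidence.items() if kv[1] < 0), key=lambda kv: kv[1])
    if not negatives:
        return "No factors weighed against your application."
    reasons = ", ".join(name.replace("_", " ") for name, _ in negatives[:top_k])
    return f"The factors that most affected this decision were: {reasons}."

evidence = {"debt_to_income": -1.8, "credit_history_years": 0.9, "recent_missed_payment": -0.6}
print(explain_for_developer(evidence))
print(explain_for_applicant(evidence))
```

Both views are derived from the same underlying evidence, which is what keeps the friendlier explanation from drifting away from what the system actually did.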
Phillips and his coauthors connect the concept of explainable AI decision-making to previous work on AI, and they also turn the demand back on people: do humans themselves meet the standards we are asking of AI? After examining human decision-making against the report's four principles, the authors conclude that we do not.
Citing several examples, the researchers write: "People's explanations of their own choices and inferences are largely unreliable. Without realizing it, people incorporate irrelevant information into all kinds of decisions, from judgments of personality to jury verdicts."
However, recognizing this apparent double standard may ultimately help us better understand our own decisions and create a safer, more transparent world.
"With advances in AI interpretability research, we may find that some parts of AI systems are better able to meet social expectations and goals than humans themselves," Lipps said His past research has shown that collaboration between humans and AI can produce higher accuracy than either party works alone. "Understanding the interpretability of artificial intelligence systems and human decision-making behavior opens the door to the realization of continuous integration of human and AI advantages."
Phillips said the authors now hope the comments they receive will advance the discussion. "I don't think we yet have a clear idea of what the right benchmarks are," he said. "At the end of the day, we are not trying to answer all of these questions, but to enrich the field so that our discussions are productive."