Uncertainty reasoning: let the model know that it doesn't know

2022-06-10 18:50:00 User 1908973

Deep learning has achieved great success, but it still falls short in explainability and trustworthiness. It is important that a model attaches a confidence to the results of its own reasoning, that is, that it reports how uncertain its inferences are. If a model can know what it does not know, it has the most basic form of self-awareness: it can output both an inference result and the confidence in that result. This can be achieved through uncertainty reasoning.

With uncertainty reasoning, the model stays composed when it faces out-of-distribution data: what it registers is an increase in uncertainty. The next step is then to improve the model's understanding of the new environment and reduce that uncertainty.

The following introduces the paper and the subjective logic theory behind it:

2204: https://github.com/hanmenghan/TMC (video included)

The paper combines uncertainty-based trusted reasoning with deep multi-view fusion classification.

Abstract :

Existing multi-view classification algorithms focus on improving accuracy by exploiting the different views, which are usually integrated into a common representation for downstream tasks. Although effective, it is also crucial to ensure the reliability of both the multi-view integration and the final decision, especially for noisy, corrupted, and out-of-distribution data. Dynamically evaluating the reliability of each view for each sample enables reliable integration, and this can be achieved through uncertainty estimation. With this in mind, we propose a novel multi-view classification algorithm, termed Trusted Multi-view Classification (TMC), which provides a new paradigm for multi-view learning by dynamically integrating different views at the evidence level. The proposed TMC improves classification reliability by taking into account the evidence from each view. Specifically, we introduce a variational Dirichlet distribution to characterize the distribution of the class probabilities, parameterize it with the evidence from different views, and combine the views with Dempster-Shafer theory. The unified learning framework produces accurate uncertainty estimates, which make the model reliable as well as robust to noisy or corrupted data. Theoretical and experimental results verify the effectiveness, accuracy, robustness, and reliability of the proposed model.
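To make the evidence-level fusion described in the abstract concrete, here is a minimal NumPy sketch (my own illustration, not code from the linked repository): per-view evidence is mapped to a Dirichlet-based opinion, i.e. per-class belief masses plus an uncertainty mass, and two views are merged with a reduced Dempster-Shafer combination rule. Function names and the toy evidence values are illustrative assumptions.

```python
import numpy as np

def opinion_from_evidence(evidence):
    """Map non-negative evidence e_k over K classes to an opinion.

    Dirichlet parameters alpha_k = e_k + 1, strength S = sum(alpha);
    belief masses b_k = e_k / S, uncertainty mass u = K / S,
    so that sum(b) + u = 1.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    S = (evidence + 1.0).sum()
    return evidence / S, K / S

def combine_two_views(b1, u1, b2, u2):
    """Reduced Dempster-Shafer combination of two opinions.

    C is the conflicting mass (belief the two views assign to
    different classes); the result is renormalized by (1 - C).
    """
    C = np.outer(b1, b2).sum() - np.inner(b1, b2)
    b = (b1 * b2 + b1 * u2 + b2 * u1) / (1.0 - C)
    u = (u1 * u2) / (1.0 - C)
    return b, u

# Toy example: view 1 carries strong evidence for class 0,
# view 2 carries almost no evidence (mostly uncertainty mass).
b1, u1 = opinion_from_evidence([40.0, 1.0, 1.0])
b2, u2 = opinion_from_evidence([0.5, 0.5, 0.5])
b, u = combine_two_views(b1, u1, b2, u2)
print(b.round(3), round(u, 3))  # the confident view dominates; u shrinks
```

The point of combining at the evidence level is that a view with little evidence contributes mostly uncertainty mass and therefore barely shifts the fused belief; this is how unreliable or noisy views get down-weighted.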

The theory behind the paper is very powerful. Here it is:

This book introduces the formalism of subjective logic, which is an important tool for understanding uncertainty and incorporating it into decision-making.

Subjective logic is a probabilistic logic for handling uncertainty, first proposed by Audun Jøsang, that gives a formal representation for reasoning about trust. Assessing and making use of trust requires reasoning, so subjective logic has developed into a set of principles and methods for probabilistic reasoning under uncertainty. This book is the first to provide a comprehensive account of subjective logic and all of its operators.

For a decision maker, it makes a big difference whether a probability is confident or uncertain. For example, it is risky to base important decisions on low-confidence probabilities. In such cases the decision maker should demand additional evidence, so that the analyst can have more confidence in the concluded probability of the hypothesis of interest.

A coin-toss probability of 0.5 is not the same as ignorance; this observation is actually quite enlightening.

What is needed is a way to express a lack of confidence in a probability. In subjective logic, lack of confidence in a probability is expressed as uncertainty mass.
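As a concrete, hedged illustration (my own sketch, not code from the book): in subjective logic a binomial opinion is a tuple (b, d, u, a) of belief, disbelief, uncertainty mass, and base rate, with b + d + u = 1, and it projects back to an ordinary probability as P = b + a * u.

```python
def projected_probability(b, d, u, a):
    """Binomial opinion (b, d, u, a): belief, disbelief, uncertainty
    mass, and base rate, with b + d + u = 1. The projected
    probability is P = b + a * u."""
    assert abs(b + d + u - 1.0) < 1e-9
    return b + a * u

# A fair coin backed by solid evidence: no uncertainty mass at all.
print(projected_probability(b=0.5, d=0.5, u=0.0, a=0.5))  # 0.5
# Total ignorance: all mass on uncertainty, same base rate.
print(projected_probability(b=0.0, d=0.0, u=1.0, a=0.5))  # 0.5
```

Both opinions project to the same probability of 0.5, yet one reflects solid evidence and the other reflects none; the uncertainty mass u is exactly what tells them apart.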

More references to this book :

AGI basics, uncertainty reasoning, subjective logic, ppt 1

AGI basics, uncertainty reasoning, subjective logic, ppt 2

Bayes' formula rewritten clearly in terms of base rates
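For reference, the base-rate form of Bayes' theorem that the last item points to can be written as follows (my rendering in standard notation, with a(H) denoting the base rate of hypothesis H):

$$
P(H \mid E) = \frac{a(H)\,P(E \mid H)}{a(H)\,P(E \mid H) + \big(1 - a(H)\big)\,P(E \mid \lnot H)}
$$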


Copyright notice
This article was created by [User 1908973]. Please include a link to the original when reposting. Thank you.
https://yzsam.com/2022/161/202206101749569464.html