Probability is a precise model of uncertainty.
Belief functions are an imprecise model of uncertainty.
Fuzzy logic is an imprecise model (without a formal concept of uncertainty, although some extensions, such as fuzzy probability, do encompass uncertainty). The fuzzy membership function is rather different from a probability measure, but the possibility function behaves more like a probability and is directly comparable. Although Graphical-Belief does not currently support possibilities, they fit well into its generic valuation-based framework.
Although belief functions are a more flexible class of model than probabilities, probabilities retain certain advantages: they are more familiar (and hence easier to understand) and computationally simpler. Thus, it is unclear which is better for which application.
Fortunately, this does not matter in Graphical-Belief; it supports both. In fact, it does more than that: it uses a valuation-based inference engine. The theory of valuations (Shenoy and Shafer [1990]) provides the basis for this generic inference engine. A valuation is a function that assigns values to sets of outcomes. The fusion-and-propagation algorithm at the heart of Graphical-Belief uses only two operations on valuations: combination and projection (onto a larger or smaller outcome space). In principle, Graphical-Belief could use any set function that obeys the rules of valuations as a representation of uncertainty.
In practice, Graphical-Belief uses object-oriented programming techniques to implement the valuation-based protocol. Although Graphical-Belief currently supports only belief functions, all that is needed to implement a new kind of uncertainty is to write methods for the projection and combination operators. As Shenoy has already shown how utilities (measures of preference) and possibilities fit into the valuation-based framework, it should be straightforward to add them to Graphical-Belief.
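To make the protocol concrete, here is a rough sketch in Python of one valuation type, a discrete probability potential, supporting the two required operations. The class and method names are my own illustrative choices, not Graphical-Belief's actual implementation (which is written against its own object system).

```python
from itertools import product

class ProbabilityPotential:
    """One possible valuation type: a table of weights over discrete variables.
    Illustrative only -- not Graphical-Belief's actual classes."""

    def __init__(self, variables, table, domains):
        self.variables = tuple(variables)  # ordered variable names
        self.domains = domains             # variable name -> list of values
        self.table = table                 # tuple of values -> weight

    def combine(self, other):
        """Combination: pointwise product over the union of variable sets."""
        vars_out = self.variables + tuple(
            v for v in other.variables if v not in self.variables)
        domains = {**self.domains, **other.domains}
        table = {}
        for combo in product(*(domains[v] for v in vars_out)):
            assign = dict(zip(vars_out, combo))
            a = self.table[tuple(assign[v] for v in self.variables)]
            b = other.table[tuple(assign[v] for v in other.variables)]
            table[combo] = a * b
        return ProbabilityPotential(vars_out, table, domains)

    def project(self, onto):
        """Projection onto a smaller outcome space: sum out the other variables."""
        vars_out = tuple(v for v in self.variables if v in onto)
        table = {}
        for combo, weight in self.table.items():
            assign = dict(zip(self.variables, combo))
            key = tuple(assign[v] for v in vars_out)
            table[key] = table.get(key, 0.0) + weight
        return ProbabilityPotential(
            vars_out, table, {v: self.domains[v] for v in vars_out})

# Example: chain P(A) with P(B|A), then project onto B to get the marginal.
p_a = ProbabilityPotential(
    ('A',), {('t',): 0.3, ('f',): 0.7}, {'A': ['t', 'f']})
p_ab = ProbabilityPotential(
    ('A', 'B'),
    {('t', 't'): 0.9, ('t', 'f'): 0.1, ('f', 't'): 0.2, ('f', 'f'): 0.8},
    {'A': ['t', 'f'], 'B': ['t', 'f']})
p_b = p_a.combine(p_ab).project(['B'])  # P(B=t) = 0.3*0.9 + 0.7*0.2 = 0.41
```

A belief-function valuation would keep the same two method signatures but store mass over *sets* of outcomes; that interchangeability is the point of the generic protocol.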
We began our discussion of the difference between probabilities and belief functions by drawing a subtle distinction between the terms imprecision and uncertainty. To what extent is that distinction real, and to what extent is it artificial? The thrust of Cheeseman's [1986] famous refutation of fuzzy logic is that imprecision can be modelled with uncertainty; that is, with a second order probability model over the unknown probability of the event.
Such second order models certainly have distinct advantages. In particular, they allow us to learn (improve the model) in the presence of data. (The parameters and data example shows how this is implemented in Graphical-Belief.) As the amount of data grows, our uncertainty about the unknown parameter decreases until it is almost as if we knew its exact value.
The problem with second order probability models is that when there is little data, we must rely on subjective probability judgements to build them. Often we must assume a convenient model for ignorance, for example, that all values of the parameter are equally likely (on a certain scale). As these judgements are made before observing data, they are prior laws for the parameter. After observing data, we can update them into posterior laws according to the straightforward principles of Bayesian statistics. Bayesian statisticians have found that when there is a great deal of data, inferences based on the posterior laws are not very sensitive to the prior laws. However, when there is little data, the choice of prior can have a large impact on the resulting inferences.
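The standard Beta-Binomial model illustrates this sensitivity. The sketch below (my own function, not part of Graphical-Belief) compares two common "ignorance" priors, the uniform Beta(1,1) and Jeffreys' Beta(0.5,0.5): with three observations their posterior means differ noticeably, while with a thousand observations they essentially agree.

```python
def posterior_mean(alpha, beta, successes, failures):
    """Posterior mean of a Binomial proportion under a Beta(alpha, beta)
    prior law; Bayesian updating just adds the counts to the prior."""
    return (alpha + successes) / (alpha + beta + successes + failures)

uniform, jeffreys = (1.0, 1.0), (0.5, 0.5)   # two candidate ignorance priors

# Little data: 2 successes, 1 failure -- the prior matters.
d_little = abs(posterior_mean(*uniform, 2, 1) - posterior_mean(*jeffreys, 2, 1))

# Lots of data: 600 successes, 400 failures -- the prior washes out.
d_lots = abs(posterior_mean(*uniform, 600, 400)
             - posterior_mean(*jeffreys, 600, 400))
```

Here `d_little` is 0.025 (0.600 versus 0.625) while `d_lots` is on the order of 0.0001, matching the Bayesian observation in the text.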
One of the advantages of belief functions is that they have a convenient and unambiguous representation for ignorance. Dempster [1968] developed second order belief function models which, in the presence of a lot of data, look much like second order probability models, but which with very little data produce upper and lower bounds instead of single probability estimates. Almond [1995] discusses using such models in a reliability context. Unfortunately, the models used in Almond [1995] rely on assumptions about the distributions of pivotal variables which are almost as strong, and as difficult to justify, as prior laws for parameters.
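A minimal sketch may help make the belief-function representation of ignorance concrete. The code below (illustrative names of my own, assuming a two-state frame for a reliability-style component) implements Dempster's rule of combination and the belief/plausibility bounds; combining any evidence with the vacuous belief function leaves it unchanged, which is why the vacuous function is an unambiguous model of ignorance.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: intersect focal elements, multiply their masses,
    and renormalize to remove any mass that fell on the empty set."""
    out = {}
    for s1, w1 in m1.items():
        for s2, w2 in m2.items():
            inter = s1 & s2
            if inter:  # drop mass on the empty set (conflict)
                out[inter] = out.get(inter, 0.0) + w1 * w2
    total = sum(out.values())  # 1 minus the conflict
    return {s: w / total for s, w in out.items()}

def belief(m, s):
    """Lower bound: mass committed to subsets of s."""
    return sum(w for a, w in m.items() if a <= s)

def plausibility(m, s):
    """Upper bound: mass not committed against s."""
    return sum(w for a, w in m.items() if a & s)

frame = frozenset({'working', 'failed'})
vacuous = {frame: 1.0}  # ignorance: all mass on the whole frame
evidence = {frozenset({'working'}): 0.6, frame: 0.4}

# Combining with ignorance changes nothing...
combined = dempster_combine(evidence, vacuous)
# ...and the result is an interval [belief, plausibility], not a point.
```

With this evidence, `belief` and `plausibility` of "working" come out as the interval [0.6, 1.0] rather than the single number a probability model would force.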
Although prior laws for parameters can be difficult to obtain, second order models are clearly better than their first order equivalents. In situations where we have made an arbitrary modelling decision, we can study the sensitivity of our conclusions to that decision. If the difference is small, then the decision probably did not matter much. If the difference is large, we at least know that there is some controversy about our inferences and the size of the uncertainty or imprecision. We have already shown how to study sensitivity using Graphical-Belief and how data about the parameters can be incorporated into the model. Finally, the knowledge engineering features of Graphical-Belief allow us to store and re-use difficult-to-obtain prior judgements in future modelling efforts.
When the field of uncertainty modelling in artificial intelligence was young, it was known for its holy wars about the best representation of uncertainty and/or imprecision. These wars were filled with seeming paradoxes in which a good model using one representation of uncertainty would be compared with a poor model using a different representation. Currently, most researchers recognize that each representation has its own advantages and disadvantages. Probabilities are perhaps the best understood representation, but they lack an unambiguous representation for ignorance. Belief functions are more flexible, but they have a higher computational cost and can lead to weak decisions.
The focus of the uncertainty in artificial intelligence community has shifted away from holy wars and towards using these models to solve practical problems. One of the most widely used models is the probabilistic graphical model, in part because the model graph makes explicit the independence assumptions that were previously implicit. As many of the paradoxes involving both probabilities and belief functions turn on questionable independence assumptions, we expect that belief function graphical models will have many of the same advantages as probabilistic graphical models. Because Graphical-Belief supports generic valuation-based models (including both probabilities and belief functions), we expect that it will be a valuable tool for exploring alternative representations of uncertainty.
One important difference between probabilities and belief functions is that the latter can lead to weak decisions: models which are too weak to support a decision. To improve these models, we need to discover why they are making certain predictions and whether knowing the results of certain tests would improve our knowledge. The next example explores the fields of explanation and test selection.
For more discussion on these issues, check out the references for this section.
Weight of Evidence.
Graphical-Belief contains some powerful test selection capabilities based on the idea of weight of evidence. This example explores a few of them.
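Graphical-Belief's actual test-selection machinery is not shown here, but the underlying idea, I. J. Good's weight of evidence, can be sketched in a few lines. The function names and numbers below are my own illustrations, assuming a binary hypothesis H and a test with discrete outcomes.

```python
from math import log2

def weight_of_evidence(p_e_given_h, p_e_given_not_h):
    """Good's weight of evidence W(H:E) = log P(E|H)/P(E|not-H), in bits.
    Positive values favour H; zero means the observation is uninformative."""
    return log2(p_e_given_h / p_e_given_not_h)

def expected_weight(p_results_given_h, p_results_given_not_h):
    """Expected weight of evidence a test provides for H when H is true:
    average W over the test's possible results, weighted by P(result|H).
    A natural score for ranking candidate tests before running them."""
    return sum(p * weight_of_evidence(p, q)
               for p, q in zip(p_results_given_h, p_results_given_not_h))

# A test whose outcome distribution differs sharply between H and not-H
# carries high expected weight; an outcome equally likely either way
# carries none.
informative = expected_weight([0.9, 0.1], [0.2, 0.8])
useless = weight_of_evidence(0.5, 0.5)
```

The expected weight is a Kullback-Leibler divergence between the two outcome distributions, so it is always non-negative, and ranking tests by it picks the test most likely to shift belief about H.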
The Graphical-Belief user interface is implemented in Garnet.
More information is available about obtaining Graphical-Belief (and why it is not generally available).
Russell Almond is the author of Graphical-Belief.
Insightful is the company that StatSci eventually evolved into.