Uncertainty: Belief Functions

Belief Functions

To get a less precise version of probabilities, we relax the additivity condition. We define a new measure on the set of possible outcomes, called the belief, with the property that BEL(A or B) >= BEL(A) + BEL(B) for disjoint A and B. With a few additional assumptions we wind up with belief functions. Shafer [1976] first laid out a theory of evidence based on these measures, called belief functions. Because Shafer's book extends earlier work by Dempster [1968], this is sometimes called the Dempster-Shafer theory. Shafer [1990] reviews recent developments in this theory, and Almond [1995] provides an elementary exposition (with applications to graphical models).

Belief functions are like a random message, each message being a set which contains the outcome. For a given set A, the mass m(A) is the probability that the set A will be the message. For a given set B, the belief BEL(B) is the probability that the random message will prove that the outcome must lie in B, and the plausibility PL(B) is the probability that the random message will not disprove that the outcome can lie in B. This is a quite flexible scheme which includes both probability and first order logic as special cases. The belief is a sort of lower bound on the probability and the plausibility is a sort of upper bound, but belief functions carry additional regularity assumptions which make them slightly different from true upper and lower probabilities. (Almond [1995] shows some circumstances in which they behave the same way as true upper and lower probabilities.)
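These definitions can be computed directly from a mass function: BEL(B) sums the mass of every focal set wholly contained in B, and PL(B) sums the mass of every focal set that intersects B. A minimal sketch in Python over a toy frame of outcomes {a, b, c} (the frozenset encoding is our own illustrative choice, not part of Graphical-Belief):

```python
def belief(mass, b):
    """BEL(B): total mass of the focal sets wholly contained in B."""
    return sum(m for a, m in mass.items() if a <= b)

def plausibility(mass, b):
    """PL(B): total mass of the focal sets that intersect B."""
    return sum(m for a, m in mass.items() if a & b)

# A toy mass function: keys are the possible "messages" (focal sets),
# values are the probabilities that each set is the message.
mass = {frozenset("ab"): 0.5,
        frozenset("abc"): 0.3,
        frozenset("c"): 0.2}

print(belief(mass, frozenset("c")))        # 0.2  (only m({c}) is inside {c})
print(plausibility(mass, frozenset("c")))  # 0.5  (m({a,b,c}) + m({c}))
```

Note that BEL(B) <= PL(B) always, which is what makes them behave as lower and upper bounds.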

Consider once again the check valve, which has three possible failure states: stuck open, stuck closed, or some other state (such as a rupture). For simplicity, we will use the word in italics to denote the state. If we know that the valve has failed, that knowledge could correspond to one of seven possible sets of failure states: {open}, {closed}, {other}, {open, closed}, {open, other}, {closed, other}, or {open, closed, other}. A belief function would allow us to ascribe a probability mass, or mass, to each of those sets of states.

The power of belief functions comes from their flexibility in modelling incomplete states of information such as might be associated with the failure states of the valve. An important example is the vacuous belief function created by placing probability mass 1.0 on the set {open, closed, other} (any state). This conveys the complete lack of information in the failure data about the relative frequencies of the various failure states. This model will result in conservative estimates, that is the upper bound on the other failure rate is likely to be too high and the lower bound too low.

As conceptually appealing as the vacuous model is, it is not very realistic. In the case of our valve, the probability of an other failure (specifically, a rupture) is known to be smaller than the probability of either of the other two failure states. For example, from our experience we may decide that a stuck failure (open or closed) is between 9 and 99 times more likely than the other state. This results in the following mass function:

m({open, closed}) = .9
m({open, closed, other}) = .09
m({other}) = .01

This translates into an upper bound of 10% and a lower bound of 1% for the other state. Both the open and closed states have a belief of 0 and a plausibility of .99; the set of states {open, closed}, however, has belief .9 and plausibility .99. Thus this belief function tells us that the failure is very likely to be stuck open or stuck closed, but not which one.
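These bounds can be checked mechanically from the mass function above. A small sketch (in Python; the frozenset encoding of the states is our own, not Graphical-Belief's):

```python
def bel(mass, b):
    """BEL(B): total mass of focal sets contained in B."""
    return sum(p for a, p in mass.items() if a <= b)

def pl(mass, b):
    """PL(B): total mass of focal sets intersecting B."""
    return sum(p for a, p in mass.items() if a & b)

# First judgement: a stuck failure is 9 to 99 times more likely
# than an "other" failure (rupture, etc.).
m1 = {frozenset({"open", "closed"}): 0.90,
      frozenset({"open", "closed", "other"}): 0.09,
      frozenset({"other"}): 0.01}

for b in [frozenset({"other"}), frozenset({"open"}),
          frozenset({"open", "closed"})]:
    print(sorted(b), round(bel(m1, b), 2), round(pl(m1, b), 2))
# ['other'] 0.01 0.1
# ['open'] 0 0.99
# ['closed', 'open'] 0.9 0.99
```

The [.01, .10] interval for other and the [0, .99] intervals for open and closed match the bounds quoted in the text.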

We might further know that stuck open and stuck closed failures are roughly equally likely. We need to quantify what we mean by this; it might translate into a judgement that the probability of seeing open is no more than three times that of seeing closed, and similarly that the probability of seeing closed is no more than three times that of seeing open. Because this judgement says nothing about the state other, we can express it like this:

m({open, other}) = .25
m({open, closed, other}) = .5
m({closed, other}) = .25

This gives a belief of .25 and a plausibility of .75 for both the set {open, other} and the set {closed, other}.

Because the first judgement tells us about the relationship of other to the open and closed states, and the second judgement tells us about the relationship between the open and closed states, they are independent and we can combine them using Dempster's product-intersection rule. This results in the belief function represented by the following mass function:

m({open}) = .225
m({closed}) = .225
m({open, closed}) = .45
m({open, other}) = .0225
m({closed, other}) = .0225
m({open, closed, other}) = .045
m({other}) = .01
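The combined mass function above can be reproduced with a short routine. The sketch below (Python, with our own frozenset encoding of the states) implements Dempster's product-intersection rule: multiply the masses of the two independent judgements, intersect their focal sets, and renormalize away any mass assigned to the empty set (in this example no intersection is empty, so no renormalization is actually needed):

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's product-intersection rule for two independent
    belief functions given as {frozenset: mass} dictionaries."""
    combined = {}
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        c = a & b
        if c:  # discard mass falling on the empty intersection
            combined[c] = combined.get(c, 0.0) + pa * pb
    k = sum(combined.values())  # total mass surviving; renormalize
    return {c: p / k for c, p in combined.items()}

# Judgement 1: stuck failures 9-99 times more likely than "other".
m1 = {frozenset({"open", "closed"}): 0.90,
      frozenset({"open", "closed", "other"}): 0.09,
      frozenset({"other"}): 0.01}

# Judgement 2: open and closed each at most 3 times as likely as the other.
m2 = {frozenset({"open", "other"}): 0.25,
      frozenset({"open", "closed", "other"}): 0.50,
      frozenset({"closed", "other"}): 0.25}

m12 = dempster(m1, m2)
for s, p in sorted(m12.items(), key=lambda kv: -kv[1]):
    print(sorted(s), round(p, 4))
```

For example, m({open}) = .225 arises as the product .9 x .25 from intersecting {open, closed} with {open, other}, and the three products landing on {other} (.0025 + .005 + .0025) sum to .01.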

This model is more complex than the probability model; it also more closely matches our imprecise state of information about the relative rates of the various failure categories. Thus we trade complexity of modelling and representation for a more specific notation for ignorance. By building a second order model, we can also learn from our experience.


The Graphical-Belief user interface is implemented in Garnet.


Russell Almond, <lastname> (at) acm.org
Last modified: Fri Aug 16 18:52:31 1996