Weight of Evidence, Explanation and Test Selection

Two of the more interesting features of rule-based expert systems are:
  1. their ability to "explain" their actions, and
  2. their ability to "select" the next test.
If we want graphical models to serve the same roles as expert systems, we should be clear about what we mean by test selection and explanation.

Unfortunately, there are many different ways to explain the behavior of a model. I have chosen the following definition: an explanation identifies which observations (instantiated variables) are most influential with respect to our current hypothesis. This leads naturally to the definition of the weight of evidence.

Weight of Evidence

Suppose we have a target hypothesis H whose state we are trying to establish. For example, we may be trying to establish whether or not a patient has coronary artery disease. In this case, we may wish to know the influence that a finding E has on our information about H. I.J. Good (see the references below) suggests the weight of evidence:

W(H:E)=log(P(E|H)/P(E|not H))

as a metric for the explanatory power of an observation. Note that the weight of evidence is a signed quantity: observations which increase the probability of H have positive weight of evidence, while those which decrease the probability of H have negative weight of evidence.
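
To make the definition concrete, here is a small Python sketch (not part of Graphical-Belief; the probabilities are invented purely for illustration). It uses the natural logarithm; any base works, changing the result only by a constant factor.

import math

def weight_of_evidence(p_e_given_h, p_e_given_not_h):
    # W(H:E) = log( P(E|H) / P(E|not H) )
    # Positive values favor H, negative values favor not-H.
    # Assumes both probabilities are strictly positive.
    return math.log(p_e_given_h / p_e_given_not_h)

# Hypothetical finding: present in 80% of patients with coronary
# artery disease, but in only 20% of patients without it.
print(weight_of_evidence(0.80, 0.20))   # about +1.39, supports H
print(weight_of_evidence(0.20, 0.80))   # about -1.39, supports not-H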

Expected Weight of Evidence

We can also use weight of evidence to help guide the selection of the next test (variable to observe). Suppose that a test T has n possible outcomes, t1 through tn. The expected weight of evidence is the expected value (average) of the weight of evidence under the assumption that the hypothesis is true, that is:
EW(H:T) = sum_i W(H:ti) P(ti|H)
Expected weight of evidence is a quasi-utility: it plays the role that a formal utility would play in a full decision analysis. In particular, expected weight of evidence is a stand-in for the value of information. Naturally, we want to select tests with high expected weight of evidence (or high value of information if we have a full decision model). Unfortunately, it is seldom that straightforward in real-world problems, because we must also account for (1) the cost of testing and (2) the fact that tests come in bunches. Madigan and Almond [1995] go into more detail.
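
As a rough sketch of how expected weight of evidence could drive test selection (again illustrative Python with made-up outcome distributions, ignoring the cost and bundling issues noted above), we compute EW(H:T) for each candidate test and prefer the larger value:

import math

def expected_weight_of_evidence(p_t_given_h, p_t_given_not_h):
    # EW(H:T) = sum_i W(H:ti) P(ti|H), where W(H:ti) = log P(ti|H)/P(ti|not H).
    # The arguments give the outcome distributions of T under H and under not-H.
    # Assumes all outcome probabilities are strictly positive.
    return sum(ph * math.log(ph / pnh)
               for ph, pnh in zip(p_t_given_h, p_t_given_not_h))

# Hypothetical three-outcome tests for the coronary artery disease example.
candidate_tests = {
    "treadmill test": ([0.6, 0.3, 0.1], [0.2, 0.3, 0.5]),
    "blood panel":    ([0.4, 0.4, 0.2], [0.3, 0.4, 0.3]),
}
for name, (p_h, p_not_h) in candidate_tests.items():
    print(name, expected_weight_of_evidence(p_h, p_not_h))
# Here the treadmill test has the larger expected weight of evidence.

Note that EW(H:T) is the Kullback--Leibler divergence between P(T|H) and P(T|not H), so it is never negative; it is the cost and bundling considerations that make real test selection harder.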

Explanation Tools

Graphical-Belief implements four kinds of explanation tools; Madigan, Mosurski and Almond [1997] describe these tools in more detail. One intriguing technique they describe which is not covered here is the evidence chain: coloring the edges of a graph to represent the strength of the evidence flowing along them. Although we experimented with those ideas in Graphical-Belief, they ultimately never proved as useful as the simpler node coloring schemes.
Begin exploring explanation using Probability Node Coloring.

References

The ideas presented in this section are further developed in the following papers:

Madigan, D., K. Mosurski and R.G. Almond [1997]
``Explanation in Belief Networks.'' Journal of Computational and Graphical Statistics, 6, 160--181.
Madigan, D. and R.G. Almond [1995]
``Test Selection Strategies for Belief Networks.'' StatSci Research Report 20. Presented at the 5th International Workshop on AI and Statistics. Describes the use of weight of evidence to select tests.

I.J. Good develops the mathematics and philosophy of weight of evidence in many references including:

Good, I.J. [1950]
Probability and the Weighing of Evidence. Charles Griffin, London.
Good, I.J. [1952]
``Rational Decisions.'' Journal of the Royal Statistical Society, Series B, 14, 107--114.
Good, I.J. [1971]
``The probabilistic explication of information, evidence, surprise, causality, explanation and utility.'' In Proceedings of a Symposium on the Foundations of Statistical Inference, Holt, Rinehart and Winston. 108--141.
Good, I.J. and W. Card [1971]
``The diagnostic process with special reference to errors.'' Methods of Information in Medicine, 10, 176--188.
Good, I.J. [1983]
Good Thinking. University of Minnesota Press.
Good, I.J. [1985]
``Weight of Evidence: A brief survey.'' In Bernardo, J., DeGroot, M., Lindley, D. and Smith, A. (eds.), Bayesian Statistics 2, North Holland, 249--269.

The following paper first introduces the idea of the evidence balance sheet in the context of a simple ("Idiot Bayes") model:

Spiegelhalter, D. and R. Knill-Jones [1984]
``Statistical and knowledge-based approaches to clinical decision support systems, with an application in gastroenterology.'' Journal of the Royal Statistical Society, Series A, 147, 35--77.

John Tukey suggested in the discussion that the evidence balance sheet should be made "graphical." The evidence balance sheet shown here is our implementation of that idea.

