Markov blanket

From Wikipedia, the free encyclopedia
In a Bayesian network, the Markov boundary of node A includes its parents, children and the other parents of all of its children.

In statistics and machine learning, when one wants to infer a random variable from a set of variables, a subset of that set is usually sufficient and the remaining variables carry no additional information. Such a subset that contains all the useful information is called a Markov blanket. If a Markov blanket is minimal, meaning that no variable can be dropped from it without losing information, it is called a Markov boundary. Identifying a Markov blanket or a Markov boundary helps to extract useful features. The terms Markov blanket and Markov boundary were coined by Judea Pearl in 1988.[1] A Markov blanket can be formed by a set of Markov chains.

Markov blanket

A Markov blanket of a random variable $Y$ in a random variable set $\mathcal{S} = \{X_1, \ldots, X_n\}$ is any subset $\mathcal{S}_1$ of $\mathcal{S}$ conditioned on which the other variables are independent of $Y$:

$$Y \perp\!\!\!\perp \mathcal{S} \setminus \mathcal{S}_1 \mid \mathcal{S}_1 .$$

It means that $\mathcal{S}_1$ contains at least all the information one needs to infer $Y$; given $\mathcal{S}_1$, the variables in $\mathcal{S} \setminus \mathcal{S}_1$ are redundant.

In general, a given Markov blanket is not unique. Any subset of $\mathcal{S}$ that contains a Markov blanket is itself a Markov blanket. In particular, $\mathcal{S}$ is a Markov blanket of $Y$ in $\mathcal{S}$.
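As a small worked example (a sketch, not taken from the article), consider the chain $X_1 \to X_2 \to Y$ with $\mathcal{S} = \{X_1, X_2\}$. By the Markov property of the chain,

$$p(y \mid x_1, x_2) = p(y \mid x_2), \qquad \text{i.e.} \qquad Y \perp\!\!\!\perp X_1 \mid X_2,$$

so $\{X_2\}$ is a Markov blanket of $Y$. The full set $\mathcal{S}$ is also a Markov blanket, which illustrates the non-uniqueness noted above; assuming $X_1$ and $Y$ are not marginally independent, $\{X_2\}$ is in addition minimal and is therefore a Markov boundary in the sense defined in the next section.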

Markov boundary

A Markov boundary of $Y$ in $\mathcal{S}$ is a subset $\mathcal{S}_2$ of $\mathcal{S}$ such that $\mathcal{S}_2$ itself is a Markov blanket of $Y$, but no proper subset of $\mathcal{S}_2$ is a Markov blanket of $Y$. In other words, a Markov boundary is a minimal Markov blanket.

The Markov boundary of a node $A$ in a Bayesian network is the set of nodes composed of $A$'s parents, $A$'s children, and the other parents of $A$'s children. In a Markov random field, the Markov boundary of a node is the set of its neighboring nodes. In a dependency network, the Markov boundary of a node is the set of its parents.
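The parents–children–co-parents rule above can be read directly off the directed graph. The following Python sketch illustrates it; the dictionary-of-parents representation, the function name markov_boundary, and the example network are assumptions made only for this illustration, not part of the article or any particular library.

```python
def markov_boundary(node, parents):
    """Markov boundary of `node` in a Bayesian network.

    `parents` maps every node to the list of its parents.  The boundary is
    the node's parents, its children, and its children's other parents.
    """
    boundary = set(parents.get(node, []))                  # the node's parents
    children = [v for v, ps in parents.items() if node in ps]
    for child in children:
        boundary.add(child)                                # the node's children
        boundary.update(p for p in parents[child] if p != node)  # co-parents
    return boundary


# Hypothetical example network:
# Cloudy -> Sprinkler, Cloudy -> Rain, Sprinkler -> WetGrass, Rain -> WetGrass
network = {
    "Cloudy": [],
    "Sprinkler": ["Cloudy"],
    "Rain": ["Cloudy"],
    "WetGrass": ["Sprinkler", "Rain"],
}
print(markov_boundary("Sprinkler", network))   # {'Cloudy', 'WetGrass', 'Rain'}
```

Under the usual faithfulness assumption this set is exactly the Markov boundary of the node; in general it is always at least a Markov blanket.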

Uniqueness of Markov boundary

The Markov boundary always exists. Under some mild conditions, the Markov boundary is unique. However, in most practical and theoretical scenarios multiple Markov boundaries may exist and provide alternative solutions.[2] When there are multiple Markov boundaries, quantities that measure causal effect can fail to be well defined.[3]
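A simple constructed illustration (not drawn from the cited references): let $X_2$ be an exact copy of $X_1$, and let $Y$ depend on $X_1$ only (plus independent noise). Then

$$p(y \mid x_1, x_2) = p(y \mid x_1) = p(y \mid x_2),$$

so both $\{X_1\}$ and $\{X_2\}$ are Markov blankets of $Y$ in $\{X_1, X_2\}$, and each is minimal whenever $Y$ actually depends on $X_1$. The variable $Y$ therefore has two distinct Markov boundaries, and a feature-selection or causal-inference procedure that assumes a unique boundary may return either one.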

Notes

  1. Pearl, Judea (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Representation and Reasoning Series. San Mateo, CA: Morgan Kaufmann. ISBN 0-934613-73-7.
  2. Statnikov, Alexander; Lytkin, Nikita I.; Lemeire, Jan; Aliferis, Constantin F. (2013). "Algorithms for discovery of multiple Markov boundaries". Journal of Machine Learning Research. 14: 499–566.
  3. Wang, Yue; Wang, Linbo (2020). "Causal inference in degenerate systems: An impossibility result". Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics: 3383–3392.