Markov logic network

A Markov logic network (or MLN) is a probabilistic logic which applies the ideas of a Markov network to first-order logic, enabling uncertain inference. Markov logic networks generalize first-order logic, in the sense that, in the limit where all formula weights tend to infinity, all unsatisfiable statements have a probability of zero, and all tautologies have probability one.

Description

Briefly, a Markov logic network is a collection of formulas from first-order logic, to each of which is assigned a real number, the weight. Taken as a Markov network, the vertices of the network graph are the atomic formulas, and two vertices are joined by an edge whenever the corresponding atoms appear together in some formula. Each formula thus corresponds to a clique, and the Markov blanket of an atom is the set of atoms that co-occur with it in some formula. A feature (potential) function is associated with each formula; it takes the value one when the formula is true, and zero when it is false. The feature is combined with the weight of its formula to form the Gibbs measure and partition function for the Markov network.
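
Concretely, with w_i the weight of formula i and n_i(x) the number of true groundings of formula i in a possible world x (a joint truth assignment to the ground atoms), the Gibbs measure and partition function take the standard form

    P(X = x) = \frac{1}{Z} \exp\Big( \sum_i w_i \, n_i(x) \Big),
    \qquad
    Z = \sum_{x'} \exp\Big( \sum_i w_i \, n_i(x') \Big),

so each true grounding of formula i multiplies the probability of a world by a factor of e^{w_i}.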

The above definition glosses over a subtle point: atomic formulas do not have a truth value unless they are grounded and given an interpretation; that is, until they are ground atoms with a Herbrand interpretation. Thus, a Markov logic network becomes a Markov network only with respect to a specific grounding and interpretation; the resulting Markov network is called the ground Markov network. The vertices of the graph of the ground Markov network are the ground atoms. The size of the resulting Markov network thus grows rapidly with the number of constants in the domain of discourse: a formula with v distinct variables has |C|^v groundings over a set C of constants, and the number of possible worlds is exponential in the number of ground atoms.
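As a rough illustration of this growth (a minimal sketch; the predicates, constants, and arities below are invented for the example), the following enumerates the ground atoms over a small domain and counts the resulting possible worlds:

from itertools import product

# Hypothetical toy signature: predicate name -> arity.
predicates = {"Smokes": 1, "Cancer": 1, "Friends": 2}
constants = ["Anna", "Bob", "Chris"]

# Ground atoms: every predicate applied to every tuple of constants.
ground_atoms = [
    f"{pred}({','.join(args)})"
    for pred, arity in predicates.items()
    for args in product(constants, repeat=arity)
]

print(len(ground_atoms))       # 3 + 3 + 9 = 15 vertices in the ground Markov network
print(2 ** len(ground_atoms))  # 32768 possible worlds (truth assignments)

A formula such as Friends(x, y) => (Smokes(x) <=> Smokes(y)) would contribute one ground clique per pair of constants, which is what makes naive grounding expensive over large domains.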

Inference

The goal of inference in a Markov logic network is to find the stationary distribution of the system, or one that is close to it; that this may be difficult or intractable is illustrated by the richness of behaviour seen in the Ising model. As in a Markov network, the stationary distribution assigns a probability to each joint truth assignment of the vertices of the graph; in this case, the vertices are the ground atoms of an interpretation, so the distribution determines the probability of the truth or falsehood of each ground atom. Given the stationary distribution, one can then perform inference in the traditional statistical sense of conditional probability: obtain the probability P(A | B) that formula A holds, given that formula B is true.
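
In terms of the Gibbs measure above, such a conditional query is in principle just a ratio of sums over possible worlds, although evaluating these sums directly is infeasible for all but the smallest domains:

    P(F_1 \mid F_2) \;=\; \frac{P(F_1 \wedge F_2)}{P(F_2)}
    \;=\; \frac{\sum_{x \,\models\, F_1 \wedge F_2} P(X = x)}{\sum_{x \,\models\, F_2} P(X = x)}.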

Inference in MLNs can be performed using standard Markov network inference techniques over the minimal subset of the ground Markov network that is relevant to answering the query. These techniques include Gibbs sampling, which is effective but may be excessively slow for large networks; belief propagation; and approximation via pseudolikelihood.
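
As a sketch of how Gibbs sampling proceeds over a ground Markov network (the atoms, formulas, and weights below are invented for illustration; practical MLN systems use far more efficient representations), each non-evidence ground atom is resampled in turn from its conditional distribution given the current truth values of all other atoms:

import math
import random

atoms = ["Smokes(Anna)", "Smokes(Bob)", "Cancer(Anna)", "Cancer(Bob)",
         "Friends(Anna,Bob)"]

# Each ground formula: (weight, satisfaction test on a world).
# A world is a dict mapping atom name -> bool.
ground_formulas = [
    # Smokes(x) => Cancer(x), weight 1.5
    (1.5, lambda w: (not w["Smokes(Anna)"]) or w["Cancer(Anna)"]),
    (1.5, lambda w: (not w["Smokes(Bob)"]) or w["Cancer(Bob)"]),
    # Friends(Anna,Bob) => (Smokes(Anna) <=> Smokes(Bob)), weight 1.1
    (1.1, lambda w: (not w["Friends(Anna,Bob)"])
                    or (w["Smokes(Anna)"] == w["Smokes(Bob)"])),
]

def weighted_score(world):
    """Sum of weights of the ground formulas satisfied in this world."""
    return sum(wt for wt, f in ground_formulas if f(world))

def gibbs_sample(evidence, n_samples=5000, burn_in=500, seed=0):
    rng = random.Random(seed)
    world = {a: evidence.get(a, rng.random() < 0.5) for a in atoms}
    counts = {a: 0 for a in atoms}
    for step in range(burn_in + n_samples):
        for a in atoms:
            if a in evidence:          # evidence atoms stay clamped
                continue
            world[a] = True
            s_true = weighted_score(world)
            world[a] = False
            s_false = weighted_score(world)
            # P(a = true | rest) is proportional to exp(s_true)
            p_true = 1.0 / (1.0 + math.exp(s_false - s_true))
            world[a] = rng.random() < p_true
        if step >= burn_in:
            for a in atoms:
                counts[a] += world[a]
    return {a: counts[a] / n_samples for a in atoms}

# Estimate P(Cancer(Bob) | Smokes(Bob), Friends(Anna,Bob)) by clamping evidence.
marginals = gibbs_sample({"Smokes(Bob)": True, "Friends(Anna,Bob)": True})
print(round(marginals["Cancer(Bob)"], 3))

Clamping the evidence atoms and averaging the sampled truth values of the query atom gives a Monte Carlo estimate of the conditional probability.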

History

Markov logic networks were introduced by Matt Richardson and Pedro Domingos, and have been studied extensively by members of the Statistical Relational Learning group at the University of Washington.
