Probabilistic soft logic

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by LINQS-lab (talk | contribs) at 02:06, 11 July 2020 (Added an initial description of the semantics of PSL/HL-MRF.). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
PSL
Developer(s): LINQS Lab
Initial release: September 23, 2011
Stable release: 2.2.2[1] / May 20, 2020
Repository: github.com/linqs/psl
Written in: Java
Platform: Linux, macOS, Windows
Type: Machine learning, statistical relational learning
License: Apache License 2.0
Website: psl.linqs.org

Probabilistic Soft Logic (PSL) is a statistical relational learning (SRL) framework for modeling probabilistic and relational domains.[2] It is applicable to a variety of machine learning problems, such as collective classification, entity resolution, link prediction, and ontology alignment. PSL combines the strengths of two powerful theories – first-order logic, with its ability to succinctly represent complex phenomena, and probabilistic graphical models, which capture the uncertainty and incompleteness inherent in real-world knowledge. More specifically, PSL uses "soft" logic as its logical component and Markov random fields as its statistical model. PSL provides sophisticated inference techniques for finding the most likely answer (i.e., the MAP state). The "softening" of the logical formulas makes it possible to cast the inference problem as a polynomial-time convex optimization rather than a much more difficult (NP-hard) combinatorial one.

Description

In recent years there has been a rise in approaches that combine graphical models and first-order logic to enable the development of complex probabilistic models over relational structures. A notable example of such approaches is Markov logic networks (MLNs).[3] Like MLNs, PSL is a modeling language (with an accompanying implementation[4]) for learning and prediction in relational domains. Unlike MLNs, PSL uses soft truth values for predicates, taking values in the interval [0,1]. This allows the underlying inference problem to be solved quickly as a convex optimization problem, which is useful in problems such as collective classification, link prediction, social network modeling, and object identification/entity resolution/record linkage.
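PSL's soft truth values are combined with relaxations of the logical connectives based on the Łukasiewicz t-norm and t-co-norm, which agree with classical logic on Boolean inputs. A minimal illustrative sketch in Python (not part of the PSL library):

```python
# Łukasiewicz relaxations of the logical connectives, operating on
# soft truth values in [0, 1]. Illustrative sketch only.

def soft_and(a, b):
    """Łukasiewicz t-norm: relaxation of logical AND."""
    return max(0.0, a + b - 1.0)

def soft_or(a, b):
    """Łukasiewicz t-co-norm: relaxation of logical OR."""
    return min(1.0, a + b)

def soft_not(a):
    """Relaxation of logical negation."""
    return 1.0 - a

# On Boolean inputs (0 or 1) these agree with classical logic:
assert soft_and(1, 1) == 1 and soft_and(1, 0) == 0
assert soft_or(0, 0) == 0 and soft_or(1, 0) == 1
```

Because each operator is a maximum or minimum of linear functions, rules built from them relax into the piecewise-linear hinge functions that make inference convex.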

Semantics

HL-MRF

A PSL program defines a family of probabilistic graphical models that are parameterized by data. More specifically, the family of graphical models it defines belongs to a special class of Markov random field known as a hinge-loss Markov random field (HL-MRF). An HL-MRF determines a density function over a set of continuous variables $\mathbf{y} = (y_1, \ldots, y_n)$ with joint domain $[0,1]^n$ using a set of evidence $\mathbf{x}$, weights $\mathbf{w} = (w_1, \ldots, w_m)$, and potential functions of the form $\phi_j(\mathbf{x}, \mathbf{y}) = \max(\ell_j(\mathbf{x}, \mathbf{y}), 0)^{p_j}$, where $\ell_j$ is a linear function and $p_j \in \{1, 2\}$. The conditional distribution of $\mathbf{y}$ given the observed data $\mathbf{x}$ is defined as

$$P(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{w}, \mathbf{x})} \exp\left(-\sum_{j=1}^{m} w_j \phi_j(\mathbf{x}, \mathbf{y})\right)$$

where $Z(\mathbf{w}, \mathbf{x})$ is the partition function that normalizes the distribution.
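The hinge-loss potentials and the unnormalized HL-MRF density described above can be sketched in Python; the specific linear functions, weights, and the helper names here are made-up examples, not part of the PSL API:

```python
import math

def hinge_potential(linear_value, p):
    """phi = max(l, 0)**p, with exponent p in {1, 2}."""
    return max(linear_value, 0.0) ** p

def unnormalized_density(y, potentials):
    """exp(-sum_j w_j * phi_j(y)). `potentials` is a list of
    (weight, linear_fn, exponent) triples; hypothetical helper
    for illustration, not the PSL implementation."""
    energy = sum(w * hinge_potential(l(y), p) for (w, l, p) in potentials)
    return math.exp(-energy)

# Example: one squared hinge potential penalizing values of y above 0.2.
potentials = [(2.0, lambda y: y - 0.2, 2)]
print(unnormalized_density(0.7, potentials))  # exp(-2 * 0.5**2) = exp(-0.5)
```

Dividing by the partition function $Z(\mathbf{w}, \mathbf{x})$ (a sum over all joint states, omitted here) would turn this unnormalized value into a proper density.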

This density is a log-concave function of $\mathbf{y}$, and thus finding a maximum a posteriori (MAP) estimate of the joint state of $\mathbf{y}$ is a convex optimization problem. This useful property allows inference in PSL to be solved in polynomial time.
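Because the energy (the weighted sum of hinge-loss potentials) is convex, MAP inference can be carried out with standard convex optimization methods. A toy one-variable illustration using projected gradient descent follows; the potentials and step size are made-up, and PSL itself uses more sophisticated solvers such as consensus ADMM:

```python
def energy(y):
    # Two made-up squared hinge potentials with equal weight 1:
    # one penalizes y for being below 0.8, the other for being above 0.2.
    return max(0.8 - y, 0.0) ** 2 + max(y - 0.2, 0.0) ** 2

def grad(y):
    # Gradient of the energy above (each hinge contributes only when active).
    return -2.0 * max(0.8 - y, 0.0) + 2.0 * max(y - 0.2, 0.0)

# Projected gradient descent over the domain [0, 1].
y = 0.0
for _ in range(200):
    y = min(1.0, max(0.0, y - 0.1 * grad(y)))

print(round(y, 3))  # converges to 0.5, the minimizer of this energy
```

Since minimizing the energy maximizes the density, the resulting $y$ is the MAP state of this tiny one-variable model.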

References

  1. ^ "PSL 2.2.2".
  2. ^ Bach, Stephen; Broecheler, Matthias; Huang, Bert; Getoor, Lise (2017). "Hinge-Loss Markov Random Fields and Probabilistic Soft Logic". Journal of Machine Learning Research. 18: 1–67.
  3. ^ Getoor, Lise; Taskar, Ben (12 Oct 2007). Introduction to Statistical Relational Learning. MIT Press. ISBN 0262072882.
  4. ^ "GitHub repository". Retrieved 26 March 2018.