Probabilistic soft logic

Probabilistic soft logic (PSL) is a statistical relational learning (SRL) framework for collective, probabilistic reasoning in relational domains. PSL uses first-order logic rules as a template language for graphical models over random variables with soft truth values from the interval [0, 1].[1]

Description

In recent years, there has been a rise in approaches that combine graphical models and first-order logic to enable the development of complex probabilistic models over relational structures. A notable example of such approaches is Markov logic networks (MLNs).[2] Like MLNs, PSL is a modelling language (with an accompanying implementation[3]) for learning and prediction in relational domains. Unlike MLNs, PSL uses soft truth values for predicates, drawn from the interval [0, 1]. This allows the underlying inference to be formulated as a convex optimization problem, which can be solved quickly. PSL has been applied to problems such as collective classification, link prediction, social network modelling, and object identification/entity resolution/record linkage.
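As an illustrative sketch only (the rule, predicate names, and truth values below are hypothetical, and the official PSL implementation is Java-based rather than Python), the following code shows the idea behind this relaxation: Boolean connectives are softened via the Łukasiewicz t-norm, each weighted ground rule contributes a convex hinge-loss term measuring its distance to satisfaction, and MAP inference minimizes the weighted sum of such terms over the unknown truth values.

    # Illustrative sketch of PSL's hinge-loss relaxation (not the official
    # implementation; predicate names and truth values are hypothetical).

    def soft_and(a: float, b: float) -> float:
        """Lukasiewicz conjunction: relaxes Boolean AND to truth values in [0, 1]."""
        return max(0.0, a + b - 1.0)

    def distance_to_satisfaction(body: float, head: float) -> float:
        """How far the ground rule body -> head is from being satisfied;
        zero whenever the head is at least as true as the body."""
        return max(0.0, body - head)

    def hinge_loss_term(weight, body_atoms, head, squared=True):
        """Weighted (optionally squared) hinge-loss contribution of one ground rule."""
        body = body_atoms[0]
        for atom in body_atoms[1:]:
            body = soft_and(body, atom)
        d = distance_to_satisfaction(body, head)
        return weight * (d ** 2 if squared else d)

    # Hypothetical grounding of a rule such as
    #     Friends(A, B) & Votes(A, P) -> Votes(B, P)
    # with Friends(alice, bob) = 0.9, Votes(alice, p) = 0.8, Votes(bob, p) = 0.3:
    # the body evaluates to max(0, 0.9 + 0.8 - 1) = 0.7, so the weighted squared
    # loss is 1.0 * max(0, 0.7 - 0.3) ** 2 = 0.16.
    print(hinge_loss_term(1.0, [0.9, 0.8], 0.3))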

References

  1. ^ Bach, Stephen; Broecheler, Matthias; Huang, Bert; Getoor, Lise (2017). "Hinge-Loss Markov Random Fields and Probabilistic Soft Logic". Journal of Machine Learning Research (JMLR). 18: 1–67.
  2. ^ Getoor, Lise; Taskar, Ben (12 October 2007). Introduction to Statistical Relational Learning. MIT Press. ISBN 0262072882.
  3. ^ "GitHub repository". Retrieved 26 March 2018.