Bayesian structure learning in graphical models
KTH, School of Engineering Sciences (SCI), Mathematics (Dept.), Mathematical Statistics. ORCID iD: 0000-0002-6886-5436
2016 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

This thesis consists of two papers studying structure learning in probabilistic graphical models for both undirected graphs and directed acyclic graphs (DAGs).

Paper A presents a novel family of graph-theoretical algorithms, called junction tree expanders, that incrementally construct junction trees for decomposable graphs. Owing to their Markovian property, junction tree expanders are shown to be suitable as proposal kernels in a sequential Monte Carlo (SMC) sampling scheme for approximating a graph posterior distribution. A simulation study for Gaussian decomposable graphical models demonstrates the efficiency of the suggested unified approach to both structural and parametric Bayesian inference.
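The SMC scheme of Paper A can be sketched roughly as follows. In this sketch, `expand` is a hypothetical stand-in for the junction tree expander proposal (it merely grows a graph by one node), and `target_weight` is a toy unnormalised target, not the actual junction tree posterior; only the propagate–reweight–resample structure reflects the method described above.

```python
import random

# Hypothetical stand-in for a junction tree expander: grow a graph by one
# node.  A graph is a dict {node: set(neighbours)}; the real proposal of
# Paper A would instead operate on junction trees of decomposable graphs.
def expand(graph, rng):
    new = len(graph)
    graph = {v: set(adj) for v, adj in graph.items()}  # copy before mutating
    neighbours = {v for v in graph if rng.random() < 0.5}
    graph[new] = neighbours
    for v in neighbours:
        graph[v].add(new)
    return graph

# Toy unnormalised target favouring sparser graphs (purely illustrative).
def target_weight(graph):
    n_edges = sum(len(adj) for adj in graph.values()) // 2
    return 0.5 ** n_edges

def smc_graphs(n_nodes, n_particles=100, seed=0):
    rng = random.Random(seed)
    particles = [{0: set()} for _ in range(n_particles)]
    for _ in range(n_nodes - 1):
        particles = [expand(g, rng) for g in particles]   # propagate
        weights = [target_weight(g) for g in particles]   # reweight
        total = sum(weights)
        probs = [w / total for w in weights]
        particles = rng.choices(particles, weights=probs, k=n_particles)  # resample
    return particles

particles = smc_graphs(n_nodes=5)
```

The Markovian property of the proposal is what makes this sequential scheme valid: each particle's next state depends only on its current graph, so graphs of growing size can be propagated and reweighted one node at a time.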

Paper B develops a novel prior distribution over DAGs that can express prior knowledge in terms of graph layerings. In conjunction with the prior, a search-and-score algorithm based on the layering property of DAGs is developed for structure learning in Bayesian networks. A simulation study shows that the search-and-score algorithm, together with the prior, has superior performance for learning graphs with a clearly layered structure compared with other priors.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2016. viii, 19 p.
Series
TRITA-MAT-A, 2015:16
Keyword [en]
Bayesian statistics, graphical models, Bayesian networks, Markov networks, structure learning
National Category
Probability Theory and Statistics
Research subject
Applied and Computational Mathematics
Identifiers
URN: urn:nbn:se:kth:diva-179852
ISBN: 978-91-7595-832-3 (print)
OAI: oai:DiVA.org:kth-179852
DiVA: diva2:892063
Presentation
2016-01-28, Room 3418, Institutionen för matematik, Lindstedtsvägen 25, Kungliga Tekniska Högskolan, Stockholm, 14:00 (English)
Opponent
Supervisors
Note

QC 20160111

Available from: 2016-01-11 Created: 2016-01-04 Last updated: 2016-01-11 Bibliographically approved
List of papers
1. Bayesian structure learning in graphical models using sequential Monte Carlo
(English)Manuscript (preprint) (Other academic)
Abstract [en]

In this paper we present a family of algorithms, the junction tree expanders, for expanding junction trees in the sense that the number of nodes in the underlying decomposable graph is increased by one. The family of junction tree expanders is equipped with a number of theoretical results, including a characterization stating that every junction tree, and consequently every decomposable graph, can be constructed by iteratively applying a junction tree expander. Further, an important feature of a stochastic implementation of a junction tree expander is the Markovian property inherent to the tree propagation dynamics. Using this property, a sequential Monte Carlo algorithm for approximating a probability distribution defined on the space of decomposable graphs is developed with the junction tree expander as a proposal kernel. Specifically, we apply the sequential Monte Carlo algorithm to structure learning in decomposable Gaussian graphical models, where the target distribution is a junction tree posterior distribution. In this setting, posterior parametric inference on the underlying decomposable graph is a direct by-product of the suggested methodology; working with the G-Wishart family of conjugate priors, we derive a closed-form expression for the Bayesian estimator of the precision matrix of Gaussian graphical models Markov with respect to a decomposable graph. The performance accuracy of the graph and parameter estimators is illustrated through a collection of numerical examples demonstrating the feasibility of the suggested approach in high-dimensional domains.

Keyword
Structure learning, Bayesian statistics, Gaussian graphical models
National Category
Probability Theory and Statistics
Research subject
Mathematics
Identifiers
urn:nbn:se:kth:diva-180326 (URN)
Note

QC 20160524

Available from: 2016-01-11 Created: 2016-01-11 Last updated: 2016-05-24 Bibliographically approved
2. The Minimal Hoppe-Beta Prior Distribution for Directed Acyclic Graphs and Structure Learning
(English)Manuscript (preprint) (Other academic)
Abstract [en]

The main contribution of this article is a new prior distribution over directed acyclic graphs intended for structured Bayesian networks, where the structure is given by an ordered block model. That is, the nodes of the graph are objects which fall into categories or blocks, and the blocks have a natural ordering or ranking. The presence of a relationship between two objects is denoted by a directed edge from the object in the lower-ranked category to the object in the higher-ranked category. The models considered here were introduced in Kemp et al. [7] for relational data and extended to multivariate data in Mansinghka et al. [12].

We consider the situation where the nodes of the graph represent random variables whose joint probability distribution factorises along the DAG. We use a minimal layering of the DAG to express the prior. We describe Monte Carlo schemes, with a generative mechanism similar to the one used for the prior, for finding the optimal a posteriori structure given a data matrix, and compare the performance with Mansinghka et al. and with the uniform prior.
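The layering constraint described above can be sketched as follows: nodes are grouped into ordered layers, edges may only point from an earlier layer to a later one (so acyclicity holds by construction), and a greedy search-and-score pass picks edges. This only mirrors the ordered block structure; the `score` used here is a toy function, not the Minimal Hoppe-Beta prior or a real marginal-likelihood score.

```python
# Hypothetical sketch of a layering-constrained DAG space.  Each layer is a
# list of node names; an edge (u, v) is allowed only if u's layer precedes
# v's layer, which guarantees the graph is acyclic.
def allowed_edges(layers):
    edges = []
    for i, src_layer in enumerate(layers):
        for dst_layer in layers[i + 1:]:
            for u in src_layer:
                for v in dst_layer:
                    edges.append((u, v))
    return edges

def greedy_search(layers, score):
    # Greedy search-and-score: repeatedly add the edge that improves the
    # score, restricted to layering-respecting candidates, until no
    # addition helps.
    dag = set()
    candidates = set(allowed_edges(layers))
    current = score(dag)
    improved = True
    while improved:
        improved = False
        for e in sorted(candidates - dag):
            s = score(dag | {e})
            if s > current:
                dag.add(e)
                current = s
                improved = True
    return dag

# Toy score rewarding a fixed "true" edge set, with a small complexity
# penalty per edge (illustrative only).
true_edges = {("a", "c"), ("b", "c")}
score = lambda dag: len(dag & true_edges) - 0.1 * len(dag)
recovered = greedy_search([["a", "b"], ["c"]], score)
```

Restricting the search to layering-respecting edges is what makes such schemes efficient: the candidate space shrinks dramatically and no acyclicity check is ever needed.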

Keyword
Graphical models, Bayesian networks, structure learning, DAG prior
National Category
Probability Theory and Statistics
Research subject
Applied and Computational Mathematics
Identifiers
urn:nbn:se:kth:diva-180327 (URN)
Note

QC 20160524

Available from: 2016-01-11 Created: 2016-01-11 Last updated: 2017-09-15 Bibliographically approved

Open Access in DiVA

Thesis full text: FULLTEXT02.pdf, 643 kB, application/pdf
Checksum (SHA-512): 5ee0328008fd69056db2d365b19c96783404b1599c4b102b693ab2bcd299697a0cf8997c219860b4a2a7b300306eac9549858ac87746f144d077f752425bb83d

Authority records

Rios, Felix Leopoldo
