Biclustering methods and a Bayesian approach to fitting Boltzmann machines in statistical learning

Date
2014-01-01
Authors
Li, Jing
Major Professor
Stephen B. Vardeman
Abstract

This dissertation focuses on two topics in statistical learning: biclustering and deep learning. The dissertation has three chapters: Chapters 1 and 2 focus on biclustering, and Chapter 3 focuses on deep learning.

Biclustering is a statistical learning technique that simultaneously partitions a set of samples and the set of their attributes into homogeneous subsets. In Chapter 1, motivated by movie rating data, we first propose a Bayesian model and an MCMC algorithm for estimating it. Because this algorithm is too slow to be of practical use with current computing power, we then propose a simplified model and design a genetic algorithm for maximizing its likelihood function. This approach works well on a small data set, but because the underlying problem is NP-hard, neither approach is practically useful at scale with current computing power. Nonetheless, both provide principled ways of solving a biclustering problem that can become usable as computing power develops.
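The sketch below is an illustration only, not the model or algorithm developed in Chapter 1: it shows how a genetic algorithm can search over joint row/column assignments, using negative within-block sum of squares as a stand-in fitness for the likelihood. All function names, parameters, and settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(X, row_labels, col_labels, K, L):
    """Negative within-block sum of squares (a stand-in for a log-likelihood);
    higher means more homogeneous biclusters."""
    total = 0.0
    for k in range(K):
        for l in range(L):
            block = X[np.ix_(row_labels == k, col_labels == l)]
            if block.size:
                total -= ((block - block.mean()) ** 2).sum()
    return total

def genetic_bicluster(X, K=3, L=3, pop_size=40, generations=100, mut_rate=0.05):
    """Hypothetical genetic algorithm over pairs (row assignment, column assignment)."""
    n, m = X.shape
    pop = [(rng.integers(K, size=n), rng.integers(L, size=m)) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(X, r, c, K, L) for r, c in pop]
        order = np.argsort(scores)[::-1]
        elite = [pop[i] for i in order[: pop_size // 2]]      # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            (r1, c1), (r2, c2) = (elite[i] for i in rng.integers(len(elite), size=2))
            r = np.where(rng.random(n) < 0.5, r1, r2)         # uniform crossover, rows
            c = np.where(rng.random(m) < 0.5, c1, c2)         # uniform crossover, columns
            mask_r = rng.random(n) < mut_rate                 # mutation: reassign a few labels
            r[mask_r] = rng.integers(K, size=mask_r.sum())
            mask_c = rng.random(m) < mut_rate
            c[mask_c] = rng.integers(L, size=mask_c.sum())
            children.append((r, c))
        pop = elite + children
    return max(pop, key=lambda rc: fitness(X, rc[0], rc[1], K, L))

# Small synthetic example; real movie rating data would be far larger.
X = rng.normal(size=(30, 20))
row_labels, col_labels = genetic_bicluster(X, K=2, L=2, generations=50)
```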

Also motivated by movie rating data, in which missing values must be addressed, in Chapter 2 we propose a new prototype-based biclustering method. We evaluate it on test cases with various percentages of missing values, using the Rand Index between our results and the "true" partitions, and it performs well even when the proportion of missing values is large. We further evaluate the method on a gene expression data set that contains no missing values, where it outperforms an existing biclustering method, Spectral Biclustering, under the mean squared error criterion.
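For illustration only (not the prototype-based method itself), the sketch below computes the Rand Index between two flat partitions and a block-mean reconstruction MSE of the kind usable for a comparison with Spectral Biclustering; the function names and toy labels are assumptions.

```python
import numpy as np
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Rand Index: the fraction of sample pairs on which two partitions agree
    (both place the pair together, or both place it apart)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum((labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
                for i, j in pairs)
    return agree / len(pairs)

def block_mean_mse(X, row_labels, col_labels):
    """Mean squared error between X and its block-mean reconstruction
    under a given biclustering."""
    X_hat = np.empty_like(X, dtype=float)
    for k in np.unique(row_labels):
        for l in np.unique(col_labels):
            idx = np.ix_(row_labels == k, col_labels == l)
            X_hat[idx] = X[idx].mean()
    return float(((X - X_hat) ** 2).mean())

# Toy check of the Rand Index against a "true" row partition.
true_rows = np.array([0, 0, 1, 1, 2, 2])
est_rows  = np.array([0, 0, 1, 2, 2, 2])
print(rand_index(true_rows, est_rows))

# For the comparison method, scikit-learn ships an implementation:
# from sklearn.cluster import SpectralBiclustering
```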

Deep learning is a statistical learning topic that involves "deep" network architectures mimicking the way information is represented in the human brain. In Chapter 3, motivated by a handwritten digit classification problem, we propose a Bayesian framework for fitting Boltzmann machine models. The proposed approach improves on previously available methods by providing a principled fitting procedure based on an MCMC algorithm. It also provides a reasonably effective way to extract features from multivariate data for use in classification.
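For context only, and not the Bayesian fitting framework of Chapter 3: the sketch below shows block Gibbs sampling in a binary restricted Boltzmann machine with fixed parameters, the basic MCMC step that sampling-based fitting methods for Boltzmann machines build on. The dimensions, initialization, and variable names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_sample(v, W, b, c, n_steps=1):
    """Block Gibbs sampling for a binary RBM with energy
    E(v, h) = -b.v - c.h - v.W.h (v visible, h hidden)."""
    for _ in range(n_steps):
        p_h = sigmoid(c + v @ W)                        # P(h_j = 1 | v)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        p_v = sigmoid(b + h @ W.T)                      # P(v_i = 1 | h)
        v = (rng.random(p_v.shape) < p_v).astype(float)
    return v, h, p_h

# Placeholder sizes: 784 visible units (a 28x28 binary digit image), 100 hidden units.
n_vis, n_hid = 784, 100
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis)
c = np.zeros(n_hid)

v0 = (rng.random((1, n_vis)) < 0.5).astype(float)       # a random binary "image"
v_k, h_k, p_h = gibbs_sample(v0, W, b, c, n_steps=10)
# The hidden-unit probabilities p_h can serve as extracted features for a classifier.
```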

Type
dissertation
Copyright
2014