Markov network structure discovery using independence tests

dc.contributor.advisor Dimitris Margaritis
dc.contributor.author Bromberg, Facundo
dc.contributor.department Department of Computer Science
dc.date 2018-08-22T19:42:00.000
dc.date.accessioned 2020-06-30T07:45:34Z
dc.date.available 2020-06-30T07:45:34Z
dc.date.copyright Mon Jan 01 00:00:00 UTC 2007
dc.date.issued 2007-01-01
dc.description.abstract <p>We investigate efficient algorithms for learning the structure of a Markov network from data using the independence-based approach. Such algorithms conduct a series of conditional independence tests on the data, successively restricting the set of possible structures until, if possible, only a single structure remains consistent with the outcomes of the tests executed. As Pearl has shown, the instances of the conditional independence relation in any domain are theoretically interdependent, a fact made explicit in his well-known conditional independence axioms. The first two algorithms we discuss, GSMN and GSIMN, exploit Pearl's independence axioms to reduce the number of tests required to learn a Markov network. This is useful in domains where independence tests are expensive, such as those involving very large or distributed data sets. Subsequently, we explore how these axioms can be exploited to "correct" the outcomes of unreliable statistical independence tests, as in applications where little data is available. We show how the problem of incorrect tests can be mapped to inference in inconsistent knowledge bases, a problem studied extensively in the field of non-monotonic logic. We present an algorithm for inferring independence values based on a sub-class of non-monotonic logics: the argumentation framework. Our results show the advantage of this approach for structure learning, with improvements of up to 20% in the accuracy of the learned networks. As an alternative to logic-based interdependence among independence tests, we also explore probabilistic interdependence. Our algorithm, called PFMN, takes a Bayesian particle-filtering approach, using a population of Markov network structures to maintain the posterior probability distribution over structures given the outcomes of the tests performed. The result is an approximate algorithm (due to the use of particle filtering) that is useful in domains where independence tests are expensive.</p>
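The conditional independence tests the abstract refers to can be illustrated with a small sketch: a G-test of X ⟂ Y | Z on discrete data, stratifying on the conditioning set. The function name, data layout, and the G-test choice here are illustrative assumptions, not the dissertation's actual implementation.

```python
# Illustrative sketch only: a G-test of conditional independence on
# discrete data, the kind of primitive that independence-based
# structure-learning algorithms such as GSMN/GSIMN invoke repeatedly.
import math
from collections import Counter, defaultdict

def g_test_ci(samples, x, y, z):
    """Return (G statistic, degrees of freedom) for testing whether
    variable x is independent of y given the variables in list z.
    `samples` is a list of dicts mapping variable name -> discrete value."""
    # Partition the samples into strata, one per assignment to z.
    strata = defaultdict(list)
    for s in samples:
        strata[tuple(s[v] for v in z)].append(s)
    g, dof = 0.0, 0
    for rows in strata.values():
        n = len(rows)
        cx = Counter(r[x] for r in rows)            # marginal counts of x
        cy = Counter(r[y] for r in rows)            # marginal counts of y
        cxy = Counter((r[x], r[y]) for r in rows)   # joint counts
        for (vx, vy), obs in cxy.items():
            exp = cx[vx] * cy[vy] / n               # expected under independence
            g += 2.0 * obs * math.log(obs / exp)
        dof += max((len(cx) - 1) * (len(cy) - 1), 1)
    return g, dof
```

The resulting G statistic is compared against a chi-squared critical value at `dof` degrees of freedom (e.g. 3.841 for one degree of freedom at the 0.05 level) to decide dependence. Because each such test scans the whole data set, the cost of many tests is exactly what motivates using Pearl's axioms to infer some test outcomes from others instead of running them.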
dc.format.mimetype application/pdf
dc.identifier archive/lib.dr.iastate.edu/rtd/15575/
dc.identifier.articleid 16574
dc.identifier.contextkey 7030370
dc.identifier.doi https://doi.org/10.31274/rtd-180813-16791
dc.identifier.s3bucket isulib-bepress-aws-west
dc.identifier.submissionpath rtd/15575
dc.identifier.uri https://dr.lib.iastate.edu/handle/20.500.12876/69221
dc.language.iso en
dc.source.bitstream archive/lib.dr.iastate.edu/rtd/15575/3289377.PDF|||Fri Jan 14 20:43:11 UTC 2022
dc.subject.disciplines Artificial Intelligence and Robotics
dc.subject.disciplines Computer Sciences
dc.subject.keywords Computer science
dc.title Markov network structure discovery using independence tests
dc.type dissertation en_US
dc.type.genre dissertation en_US
dspace.entity.type Publication
relation.isOrgUnitOfPublication f7be4eb9-d1d0-4081-859b-b15cee251456
thesis.degree.level dissertation
thesis.degree.name Doctor of Philosophy
File (Original bundle):
Name: 3289377.PDF
Size: 1.61 MB
Format: Adobe Portable Document Format