The bootstrap method for Markov chains

Date
1989
Authors
Fuh, Cheng-Der
Major Professor
K. B. Athreya
Department
Mathematics
Abstract

Let {X_n; n ≥ 0} be a homogeneous ergodic (positive recurrent, irreducible and aperiodic) Markov chain with countable state space S and transition probability matrix P = (p_ij). The problem of estimating P and the distribution of the hitting time T_δ of a state δ arises in several areas of applied probability. A resampling technique called the bootstrap, proposed by Efron (1) in 1979, has proved useful in applied statistics and probability; its application to finite state Markov chains originated in the paper of Kulperger and Prakasa Rao (2).

Suppose x = (x_0, x_1, ..., x_n) is a realization of the Markov chain {X_n; n ≥ 0}, and let P̂_n be the maximum likelihood estimator of P based on the observed data x. The bootstrap method for estimating the sampling distribution H_n of R(x, P) ≡ √n(P̂_n − P) can be described as follows. (1) Construct an estimate of the transition probability matrix P from the observed realization x, such as the maximum likelihood estimator P̂_n. (2) With P̂_n as its transition probability matrix, generate a Markov chain realization of N_n steps, x* = (x*_0, x*_1, ..., x*_{N_n}). Call this the bootstrap sample, and let P̃_n be the bootstrap maximum likelihood estimator of P̂_n computed from x*. (3) Approximate the sampling distribution H_n of R ≡ R(x, P) by the conditional distribution H*_n of R* ≡ R(x*, P̂_n) ≡ √N_n(P̃_n − P̂_n) given x. The theoretical justification of the method consists in showing that H*_n is asymptotically close to H_n.
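Steps (1)-(3) can be sketched in Python for a finite state space. This is a minimal illustration only: the helper names, the uniform-row convention for unvisited states, and the choice N_n = n are my own, not part of the thesis.

```python
import numpy as np

def mle_transition_matrix(path, k):
    """Step (1): maximum likelihood estimate of the transition matrix.

    p^_ij = (# transitions i -> j) / (# visits to i before the last step).
    Rows for states never visited are set to uniform so the matrix stays
    stochastic (an arbitrary convention for this sketch).
    """
    counts = np.zeros((k, k))
    for a, b in zip(path[:-1], path[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.where(rows > 0, counts / np.maximum(rows, 1), 1.0 / k)

def simulate_chain(P, n, rng, x0=0):
    """Generate an n-step realization of a chain with transition matrix P."""
    path = np.empty(n + 1, dtype=np.int64)
    path[0] = x0
    for t in range(n):
        path[t + 1] = rng.choice(len(P), p=P[path[t]])
    return path

def bootstrap_distribution(path, k, n_boot, rng):
    """Steps (2)-(3): draws from the conditional law, given the observed
    path x, of R* = sqrt(N_n) (P~_n - P^_n), taking N_n = len(path) - 1.
    """
    P_hat = mle_transition_matrix(path, k)
    n = len(path) - 1
    draws = []
    for _ in range(n_boot):
        star = simulate_chain(P_hat, n, rng, x0=path[0])   # bootstrap sample
        P_tilde = mle_transition_matrix(star, k)           # bootstrap MLE
        draws.append(np.sqrt(n) * (P_tilde - P_hat))
    return P_hat, np.array(draws)
```

For a two-state chain, the empirical spread of the returned draws approximates the sampling distribution H_n of √n(P̂_n − P).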
It is well known that √n(P̂_n − P) → N(0, Σ_P) in distribution, where Σ_P is the variance-covariance matrix and is continuous as a function of P with respect to the supremum norm on the class of k × k stochastic matrices. Thus, the bootstrap method is justified once it is shown that H*_n also converges to N(0, Σ_P) in distribution. The finite state space case was proved by Kulperger and Prakasa Rao (2). In this paper, we give an alternative proof of this result and generalize it to Markov chains with infinite state space.

Next, since P̂_n converges to P, the problem may also be approached via the asymptotic behavior of a double array of Markov chains in which the transition probability matrix of the n-th row converges to a limit. This leads to our third main result, a central limit theorem for a double array of Harris chains.

(1) B. Efron. Bootstrap methods: another look at the jackknife. Ann. Statist. 7 (1979): 1-26.
(2) R. J. Kulperger and B. L. S. Prakasa Rao. Bootstrapping a finite state Markov chain. To appear in Sankhya (1989).
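The normal limit can be checked numerically for a single entry. A classical result (not stated in the abstract, so this is an assumption of the sketch) gives the asymptotic variance of √n(p̂_ij − p_ij) as p_ij(1 − p_ij)/π_i, where π is the stationary distribution; the two-state chain and all constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Stationary distribution solves pi P = pi; here pi = (2/3, 1/3).
pi_0 = 2.0 / 3.0
n, reps = 4000, 400

def sqrt_n_error(rng):
    """One draw of sqrt(n) (p^_01 - p_01) from a fresh n-step realization."""
    us = rng.random(n)
    state, trans01, visits0 = 0, 0, 0
    for u in us:
        nxt = 1 if u < P[state, 1] else 0   # two states: go to 1 w.p. P[s, 1]
        if state == 0:
            visits0 += 1
            trans01 += nxt
        state = nxt
    return np.sqrt(n) * (trans01 / visits0 - P[0, 1])

vals = np.array([sqrt_n_error(rng) for _ in range(reps)])
# Classical asymptotic variance of sqrt(n)(p^_01 - p_01): p_01(1-p_01)/pi_0.
target = P[0, 1] * (1 - P[0, 1]) / pi_0   # = 0.135
```

The sample variance of `vals` should be close to `target`, and the sample mean close to zero, consistent with the N(0, Σ_P) limit.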

Copyright
1989