Empirical Likelihood Based Methods in Statistics
(06 Jun - 1 Jul 2016)



~ Abstracts ~

 

An empirical likelihood method for geo-referenced spatial data
Soutir Bandyopadhyay, Lehigh University, USA


This paper develops empirical likelihood methodology for irregularly spaced spatial data in the frequency domain. Unlike the frequency domain empirical likelihood (FDEL) methodology for time series (on a regular grid), the formulation of the spatial FDEL needs special care, due to the lack of the usual orthogonality properties of the discrete Fourier transform for irregularly spaced data and due to the presence of nontrivial bias in the periodogram under different spatial asymptotic structures. A spatial FDEL is formulated in the paper taking the effects of these factors into account. The main results of the paper show that Wilks' phenomenon holds for a scaled version of the logarithm of the proposed empirical likelihood ratio statistic, in the sense that it is asymptotically distribution free and has a chi-squared limit. As a result, the proposed spatial FDEL method can be used to build nonparametric, asymptotically correct confidence regions and tests, for irregularly spaced spatial data, for covariance parameters that are defined through spectral estimating equations. In comparison to the more common studentization approach, a major advantage of our method is that it does not require explicit estimation of the standard error of an estimator, which is itself a very difficult problem because the asymptotic variances of many common estimators depend on intricate interactions among several population quantities, including the spectral density of the spatial process, the spatial sampling density and the spatial asymptotic structure. Results from a numerical study are also reported to illustrate the methodology and its finite sample properties.
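To fix ideas, here is a schematic of a frequency-domain EL of the kind described above, written for the regular-grid case with standard notation; the symbols $I_n$, $G$ and the scaling sequence $a_n$ are illustrative assumptions, and the spatial version in the paper involves additional bias corrections:

$$ R_n(\theta) \;=\; \max\Big\{ \prod_{j=1}^{N} N p_j \;:\; p_j \ge 0,\ \sum_{j=1}^{N} p_j = 1,\ \sum_{j=1}^{N} p_j\, G(\omega_j;\theta)\, I_n(\omega_j) = 0 \Big\}, $$

where $I_n(\omega_j)$ are periodogram ordinates and $G(\cdot;\theta)$ are the spectral estimating functions defining the covariance parameters. Wilks' phenomenon then takes the form $-2\,a_n \log R_n(\theta_0) \xrightarrow{d} \chi^2_p$ for a suitable scaling $a_n$, so confidence regions are calibrated from chi-squared quantiles without estimating any standard errors.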


 

Empirical likelihood based tests for stochastic ordering under biased sampling
Hsin-wen Chang, Institute of Statistical Science, Taiwan


In two-sample comparison problems it is often of interest to examine whether one distribution function majorizes the other, i.e. to test for the presence of stochastic ordering. This talk introduces a nonparametric test for stochastic ordering based on size-biased data, allowing the pattern of size bias to differ between the two samples. The test is formulated in terms of a maximally-selected local empirical likelihood statistic. A Gaussian multiplier bootstrap is devised to calibrate the test. A simulation study indicates that the proposed test outperforms an analogous Wald-type test, and that it provides substantially greater power than what is available when ignoring the sampling bias. The approach is illustrated using data on blood alcohol concentration and age of drivers involved in car accidents, in which size bias is present because the drunker drivers are more likely to be sampled. Further, younger drivers tend to be more affected by alcohol, so when comparing with older drivers, the analysis is adjusted for differences in the patterns of size bias.

Work done jointly with Dr. Hammou El Barmi and Ian W. McKeague


 

Penalized high-dimensional empirical likelihood for joint variable and moment selections
Jinyuan Chang, The University of Melbourne, Australia


Statistical inferences in the framework of empirical likelihood (EL) are appealing and effective for studying model parameters, especially via estimating equations, where useful moment conditions can be adaptively and flexibly incorporated. To address the challenges that EL faces when both the model parameters and the estimating equations are high dimensional, we consider in this study a new penalized EL approach. Our approach applies two penalties, regularizing respectively the magnitudes of the target parameters and of the associated Lagrange multiplier in the optimization of the EL. By penalizing the Lagrange multiplier to encourage its sparsity, we show that drastic dimension reduction can be achieved for EL. Most attractively, such a reduction in the dimensionality of the Lagrange multiplier is equivalent to a selection among the high-dimensional moment conditions, resulting in a highly parsimonious and effective device for high-dimensional sparse model parameters. Allowing the numbers of parameters and estimating functions to grow exponentially with the sample size, our theory shows that the resulting penalized EL estimator is sparse and consistent, with asymptotically normally distributed nonzero components. Numerical simulations and a data analysis show that the proposed penalized EL works highly effectively in practice.

(Joint work with Cheng Yong Tang and Tong Tong Wu.)
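For orientation, one way to write such a doubly penalized EL objective schematically (the notation and the exact placement of the penalties are assumptions here; see the paper for the precise formulation):

$$ \hat{\theta} \;=\; \arg\min_{\theta}\ \max_{\lambda}\ \Big[ \frac{1}{n}\sum_{i=1}^{n} \log\{1 + \lambda^{\top} g(X_i;\theta)\} \;-\; \sum_{k=1}^{r} P_{2,\nu}(|\lambda_k|) \;+\; \sum_{j=1}^{p} P_{1,\pi}(|\theta_j|) \Big], $$

where $g(\cdot;\theta)\in\mathbb{R}^{r}$ collects the estimating functions and $P_{1,\pi}$, $P_{2,\nu}$ are sparsity-inducing penalties with tuning parameters $\pi$ and $\nu$. A zero component of the optimized $\lambda$ effectively drops the corresponding moment condition, which is the moment-selection mechanism described above.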


 

Small area model selection
Snigdhanshu Chatterjee, University of Minnesota, USA


Complex statistical models that have multiple sources of dependencies and variability in the observations are of primary importance in studying data from multiple disciplines. These include spatial, temporal, spatio-temporal, various mixed effects and other statistical models. Of special importance among such models are those that are useful for studying problems where there is limited directly observed data, for example, as in small area models. In this talk we present a new resampling-based method that can be used for simultaneous variable selection and inference in several complex models, including small area and other mixed effects models. Theoretical results justifying the proposed resampling schemes will be presented, followed by simulations and real data examples. This talk is based on research involving several students and collaborators from multiple institutions.


 

Tutorials on an introduction to empirical likelihood
Sanjay Chaudhuri, National University of Singapore


Tutorial I
Title: Basic Concepts and Hypothesis Testing

Abstract: In this first tutorial we will introduce the concept of empirical likelihood and its connection to the empirical distribution function. Some basic properties of the likelihood will be discussed. We will then move to empirical likelihood ratio tests and describe the asymptotic properties of the corresponding Wilks statistic under various assumptions on the observations.
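As a minimal illustration of the objects this tutorial introduces (standard notation, vector mean case): for i.i.d. observations $X_1,\dots,X_n \in \mathbb{R}^d$, the empirical likelihood ratio of a candidate mean $\mu$ is

$$ R(\mu) \;=\; \max\Big\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i X_i = \mu \Big\}, $$

which equals 1 at $\mu = \bar{X}$ (all $p_i = 1/n$, i.e. the empirical distribution). The Wilks-type result states that if $\mu_0$ is the true mean and the covariance matrix is finite and nonsingular, then $-2\log R(\mu_0) \xrightarrow{d} \chi^2_d$, so empirical likelihood ratio tests and confidence regions can be calibrated with chi-squared quantiles.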

Tutorial II
Title: Maximum Empirical Likelihood Estimator

Abstract: We discuss parameter estimation by maximising the empirical likelihood under constraints. Asymptotic properties of these estimates will be considered. We will also discuss the use of constraints that are not based on the parameters and their effects on the properties of the parameter estimates.

Tutorial III
Title: Some Applications

Abstract: We will discuss applications where empirical likelihood based methods have substantial advantages over parametric likelihoods. Specific applications in demography, covariance matrix estimation and modelling in the presence of missing values will be discussed.

Tutorial IV
Title: Emerging Areas

Abstract: We consider some emerging areas in empirical likelihood research. This tutorial will focus mainly on the use of empirical likelihood in the Bayesian paradigm. We will first describe the use of empirical likelihood in Bayesian applications. Next we will discuss various open problems in the theoretical justification and computational implementation of Bayesian empirical likelihood.


 

Hamiltonian Monte Carlo in Bayesian empirical likelihood computation
Sanjay Chaudhuri, National University of Singapore


We consider Bayesian empirical likelihood estimation and develop an efficient Hamiltonian Monte Carlo method for sampling from the posterior distribution of the parameters of interest. The proposed method uses hitherto unknown properties of the gradient of the underlying log-empirical likelihood function. These properties are shown to hold under minimal assumptions on the parameter space, the prior density and the functions used in the estimating equations determining the empirical likelihood. We overcome major challenges posed by the complex, non-convex boundaries of the support routinely observed for empirical likelihood, which prevent efficient implementation of traditional Markov chain Monte Carlo methods such as random walk Metropolis-Hastings, with or without parallel tempering. Our method employs a finite number of estimating equations and observations but produces valid semi-parametric inference for a large class of statistical models, including mixed effects models, generalised linear models and hierarchical Bayes models. A simulation study confirms that our proposed method converges quickly and draws samples from the posterior support efficiently. We further illustrate its utility through an analysis of a discrete data set in small area estimation.

Keywords. Constrained convex optimisation; Empirical likelihood; Generalised linear models; Hamiltonian Monte Carlo; Mixed effect models; Score equations; Small area estimation; Unbiased estimating equations.

This is joint work with Debashis Mondal, Oregon State University, and Yin Teng, NCS, Singapore.
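For readers new to the setting, a schematic of the target (standard Bayesian EL notation; the specific gradient identities exploited by the sampler are in the paper): with estimating functions $g(X_i;\theta)$ and prior $\pi(\theta)$, the posterior sampled by HMC is

$$ \pi(\theta \mid X_1,\dots,X_n) \;\propto\; \pi(\theta)\, \prod_{i=1}^{n} \hat{p}_i(\theta), \qquad \hat{p}_i(\theta) = \frac{1}{n\{1 + \hat{\lambda}(\theta)^{\top} g(X_i;\theta)\}}, $$

where $\hat{\lambda}(\theta)$ solves the usual inner EL optimization. Hamiltonian Monte Carlo then simulates trajectories using $\nabla_\theta \log \pi(\theta \mid X)$, which requires differentiating through $\hat{\lambda}(\theta)$; the support of the posterior is the (possibly non-convex) set of $\theta$ for which the inner problem is feasible.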


 

A conditional empirical likelihood approach to combine sampling design
Sanjay Chaudhuri, National University of Singapore


Inclusion of available population level information in statistical modelling is known to produce more accurate estimates than those obtained from the random samples alone. However, a fully parametric model which incorporates both these sources of information may be computationally challenging to handle.

Empirical likelihood based methods can be used to combine these two kinds of information and estimate the model parameters in a computationally efficient way. In this article we consider methods to include sampling weights in an empirical likelihood based estimation procedure so as to augment population level information in sample-based statistical modelling. Our estimator uses conditional weights and is able to incorporate covariate information both through the weights and through the usual estimating equations. We show that under the usual assumptions, as the population size increases without bound, the estimates are strongly consistent, asymptotically unbiased and normally distributed. Moreover, they are more efficient than other probability weighted analogues. Our framework provides additional justification for inverse probability weighted score estimators in terms of conditional empirical likelihood. We give an application to demographic hazard modelling, combining birth registration data with panel survey data to estimate annual first birth probabilities.

This work is joint with Mark Handcock, Department of Statistics, University of California, Los Angeles, USA and Michael Rendall, Department of Sociology, University of Maryland, College Park, USA.

Keywords and Phrases: Empirical likelihood; Complex surveys; Sampling design; Population level information.


 

High dimensional empirical likelihood for general estimating equations with dependent data
Song Xi Chen, Peking University, China and Iowa State University, USA


We study empirical likelihood inference for parameters defined by generalized estimating equations with weakly dependent observations, when the dimensions of the estimating equations and of the parameters are both diverging. The impacts of high dimensionality and dependence on the consistency and the asymptotic normality of the maximum empirical likelihood estimator are investigated. We also establish the limiting distributions of the empirical likelihood ratio statistic and of the over-identification moment test statistic under high dimensionality with dependence. Our analysis covers a range of existing results while producing new ones at the interface between high dimensionality, a growing number of parameters and dependence.


 

Quantile estimation based on clustered data
Jiahua Chen, University of British Columbia, Canada


We consider the quantile and quantile-function estimation problem based on clustered data from several connected populations. We envisage the situation where observations within a cluster are correlated but exchangeable. In the presence of clustering, an existing empirical likelihood (EL) based on the density ratio model can be interpreted as a composite EL. We find that the subsequent quantile estimations retain many of the nice asymptotic properties. We would, however, underestimate the uncertainty if the observations were regarded as independent. Therefore, the corresponding variance estimation and the construction of confidence intervals must be retooled. Instead of explicitly modelling the within-cluster correlation and developing a set of new inference procedures, we employ a cluster-based resampling method to address the undercoverage problem while retaining many of the existing inference tools. We obtain the asymptotic properties for the composite EL and prove the validity of the bootstrap confidence intervals. Simulation studies show that both the point estimators and confidence intervals have superior performance.
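For readers unfamiliar with the setting, a sketch of the density ratio model (DRM) underlying the composite EL above, in standard notation (the basis function $q$ and the labels are illustrative): with samples from connected populations $F_0, F_1, \dots, F_m$, the DRM postulates

$$ \frac{dF_k}{dF_0}(x) \;=\; \exp\{\alpha_k + \beta_k^{\top} q(x)\}, \qquad k = 1,\dots,m, $$

so an empirical likelihood built on the pooled sample estimates all the $F_k$ simultaneously, and population quantiles are obtained by inverting the fitted distribution functions. In the clustered setting above the same objective is reinterpreted as a composite EL, and uncertainty is assessed by cluster-based resampling rather than by the i.i.d. formulas.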


 

Constrained maximum likelihood estimation for model calibration using summary-level information from external big data sources
Yi-Hau Chen, Institute of Statistical Science, Academia Sinica, Taiwan


Information from various public and private data sources of extremely large sample size is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an "internal" study while utilizing summary-level information, such as information on parameters for reduced models, from an "external" big data source. We identify a set of very general constraints that link the internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods, in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature.
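One schematic way to express the linking constraints described above (the notation here is an illustrative assumption; the paper's formulation is more general and also covers stratified designs and reference samples): suppose the external source reports parameters $\theta^{*}$ of a reduced model $f(Y\mid Z;\theta)$, where $Z$ is a subvector of the internal covariates $X$, while the internal study fits $p(Y\mid X;\beta)$. Since $\theta^{*}$ solves the population score equation of the reduced model, the internal parameters are constrained by

$$ E\big[\, u_{\theta^{*}}(Y, Z) \,\big] \;=\; E_{X}\Big[ \int u_{\theta^{*}}(y, Z)\, p(y\mid X;\beta)\, dy \Big] \;=\; 0, \qquad u_{\theta}(y,z) = \frac{\partial}{\partial \theta}\log f(y\mid z;\theta), $$

and the internal likelihood is maximized subject to these constraints, with the covariate distribution handled nonparametrically.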


 

Higher-order properties of Bayesian empirical likelihood
Malay Ghosh, University of Florida, USA


Empirical likelihood serves as a good nonparametric alternative to the traditional parametric likelihood. The former involves far fewer assumptions than the latter, yet very often achieves the same asymptotic inferential efficiency. While empirical likelihood has been studied quite extensively in the frequentist literature, the corresponding Bayesian literature is somewhat sparse. Bayesian methods hold promise, however, especially with the availability of historical information, which often can be used successfully for the construction of priors. In addition, Bayesian methods very often overcome the curse of dimensionality by providing suitable dimension reduction through judicious use of priors and analyzing data with the resultant posteriors. In this paper, we provide asymptotic expansions of posteriors for a very general class of priors along with the empirical likelihood and its variations, such as the exponentially tilted empirical likelihood and the Cressie-Read version of the empirical likelihood. Other than obtaining the celebrated Bernstein-von Mises theorem as a special case, our approach also aids in finding non-subjective priors based on empirical likelihood and its variations as mentioned above.


 

Why (not) empirical likelihood?
Marian Grendar, Comenius University, Slovakia


Why EL? -- Empirical Likelihood (EL) is just one of the Generalized Minimum Contrast (GMC) estimators. Are all GMC estimators created equal? The answer can be obtained by embedding GMC into the Bayesian infinite dimensional framework. The Bayesian Law of Large Numbers then singles out the EL estimator as the only one in the GMC class that is consistent under misspecification.

Why not EL? -- EL intentionally ignores information about the support of the underlying random variable. This data-centric attitude does not come for free, as EL may fail to exist in various ways: the convex hull restriction, the zero-likelihood problem and the empty-set problem. Several modifications of EL have been developed to cope with (some of) these problems.
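The convex hull restriction mentioned above is easy to see for the mean: the EL of a hypothesized mean is zero (the defining optimization is infeasible) whenever that mean lies outside the convex hull of the data. A minimal, self-contained sketch in Python (the function name and the dual-form solver are mine, for illustration only):

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio_mean(x, mu):
    """Empirical log-likelihood ratio log R(mu) for a scalar mean, computed via the
    dual form p_i = 1 / (n * (1 + lam*(x_i - mu))).  Returns -inf when mu lies
    outside the open convex hull (min(x), max(x)): the constraints are infeasible."""
    x = np.asarray(x, dtype=float)
    if not (x.min() < mu < x.max()):
        return -np.inf                          # the convex hull restriction
    z = x - mu
    lo = (-1.0 + 1e-10) / z.max()               # keep every 1 + lam*z_i strictly positive
    hi = (-1.0 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return -np.sum(np.log1p(lam * z))           # equals sum_i log(n * p_i)

x = np.array([1.2, 0.7, 2.5, 1.9, 0.4])
print(el_log_ratio_mean(x, 1.3))   # finite: 1.3 lies inside the hull [0.4, 2.5]
print(el_log_ratio_mean(x, 3.0))   # -inf:   3.0 lies outside, so the EL fails to exist
```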

EL vs. FL. -- Fisher's original definition of the likelihood reflects the finite precision of any measurement, because of which the continuous sample space has to be partitioned. In the i.i.d. case the Fisher Likelihood (FL) is just the kernel of the multinomial likelihood. Unlike EL, the FL estimator restricted to a convex, closed subset of the probability simplex always exists, and it may put positive weight on unobserved outcomes. Consequently, the Fisher Likelihood Ratio may lead to dramatically different inferential conclusions than the Empirical Likelihood Ratio.

(Based on joint works with George Judge and Vladimír Spitalský.)


 

Jackknife empirical likelihood
Bing-yi Jing, Hong Kong University of Science and Technology, Hong Kong


Empirical likelihood has been found very useful on many different occasions. However, when applied directly to more complicated statistics such as U-statistics, it runs into serious computational difficulties. Jing, Yuan and Zhou (2009) introduced a jackknife empirical likelihood (JEL) method to overcome this problem, which is extremely simple to use in practice. The JEL has since been shown to work in many different problems. In this talk, we take a fresh look at this problem and its various applications.
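The core of the JEL construction is short enough to sketch (a minimal illustration in Python; the function names and the Gini mean difference example are my own choices): replace the U-statistic by its jackknife pseudo-values, then apply ordinary one-sample EL for a mean to those pseudo-values, treating them as if they were i.i.d.

```python
import numpy as np

def u_statistic(x, kernel):
    """Degree-2 U-statistic: average of kernel(x_i, x_j) over all pairs i < j."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x), k=1)
    return np.mean(kernel(x[i], x[j]))

def jackknife_pseudo_values(x, stat):
    """V_i = n*T_n - (n-1)*T_{n-1}^(-i).  JEL applies standard one-sample empirical
    likelihood for the mean to V_1, ..., V_n as if they were i.i.d. observations."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t_full = stat(x)
    return np.array([n * t_full - (n - 1) * stat(np.delete(x, i)) for i in range(n)])

rng = np.random.default_rng(1)
x = rng.normal(size=40)
gini = lambda s: u_statistic(s, lambda a, b: np.abs(a - b))   # Gini's mean difference
v = jackknife_pseudo_values(x, gini)
# For U-statistics the mean of the pseudo-values equals the U-statistic itself,
# so -2 log R(theta) evaluated at theta = gini(x) is (numerically) zero.
print(gini(x), v.mean())
```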


 

Empirical likelihood assessment of Markov random field models
Mark Kaiser, Iowa State University, USA


Markov random field models are formulated by specifying a full conditional distribution for each location in the field. The Markov assumption in such models reduces these to distributions conditioned on only a (usually small) set of values at what are called neighboring locations. In assessing the adequacy of a Markov random field model as a representation of the structure present in a set of data, both the distributional form chosen (e.g., normal versus lognormal) and the neighborhood structure selected (e.g., four-nearest versus eight-nearest) may be important to include in the assessment. A neighborhood structure, which implies conditional independencies, is a particularly difficult aspect of a model to diagnose and assess. We apply a spatial block empirical likelihood method to tackle this problem. The method has proven effective in distinguishing between different potential neighborhood structures, and a test procedure based on it exhibits good power. The procedure is illustrated with two problems: choosing between four-nearest and eight-nearest neighborhoods for a Gaussian model, and choosing between two co-dependent Markov random fields and one bivariate Markov random field for a problem involving two types of variables observed at the same set of locations. The latter problem is motivated by modeling the relation between wind speeds and power generation in a wind farm.


 

Higher order properties of block empirical likelihood methods
Soumendra Nath Lahiri, NC State University, USA


In this talk, we investigate the accuracy of different block empirical likelihood (BEL) methods for time series data. Contrary to the claims of Kitamura (1997), who "proved" Bartlett correctability of the BELs, it is shown that with the standard choice of the block size the rate of approximation to the limiting chi-squared distribution is very slow - in fact slower than $O(n^{-1/2})$. We propose a blocks-of-blocks bootstrap calibration and show that under suitable regularity conditions the resulting confidence intervals have an error of the order $O(n^{-1})$. Simulation results show marked improvement in the coverage accuracy of the bootstrap-calibrated BEL.
Joint work with Dan Nordman and Arindam Chatterjee.


 

An algorithmic approach to nonparametric function estimation with shape constraints
Rahul Mazumder, Massachusetts Institute of Technology, USA


I will consider the problem of least squares estimation of a multivariate nonparametric convex regression function. The associated quadratic program is difficult to scale to large problems with off-the-shelf solvers. I will describe approaches to scale this problem up to large instances. I will describe variations of this setup which allow the function estimates to have bounded Lipschitz parameters, smoothness properties, etc. I will then describe a framework that allows variable selection: what if the underlying convex function depends only upon a subset of the covariates? I will describe how modern discrete optimization methods can be used to solve this problem exactly.
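For reference, the baseline least squares convex regression problem has a simple formulation whose $O(n^2)$ shape constraints are exactly what makes off-the-shelf solvers struggle. A minimal sketch using the cvxpy modelling library (the function name and data are illustrative; this is the textbook QP, not the talk's scalable algorithm):

```python
import numpy as np
import cvxpy as cp

def convex_regression_ls(X, y):
    """Least squares convex regression: fit values f_i and subgradients g_i subject to
    f_j >= f_i + g_i . (x_j - x_i) for all pairs (i, j), i.e. the fitted values are
    consistent with some convex function.  The O(n^2) constraints limit scalability."""
    n, d = X.shape
    f = cp.Variable(n)
    G = cp.Variable((n, d))
    constraints = [f[j] >= f[i] + cp.sum(cp.multiply(G[i], X[j] - X[i]))
                   for i in range(n) for j in range(n) if i != j]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - f)), constraints)
    prob.solve()
    return f.value, G.value

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(30, 2))
y = np.sum(X**2, axis=1) + 0.1 * rng.normal(size=30)   # noisy convex ground truth
f_hat, G_hat = convex_regression_ls(X, y)
```

The fitted function is then recovered as the maximum of the affine pieces $f_i + g_i^{\top}(x - x_i)$.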


 

Singularity structures and parameter estimation in mixture models
Long Nguyen, University of Michigan, USA


Mixture and hierarchical models are popular and powerful tools for modeling complex and heterogeneous data. Parameter estimation behaviors of mixture and hierarchical models, however, are difficult to fathom from both statistical and computational viewpoints. This is due to the observation, in many instances, that the more complex a latent variable model, the more weakly identifiable the model parameters. This can be understood by looking into the singularity structure of the model's parameter space. We will describe the very rich and multi-level structure of singularities present in several well-known mixture models, such as mixtures of normals, gammas and skew-normals, and the impact they have on convergence rates of parameter estimation in such models. This work is joint with Nhat Ho.


 

Self-concordance for empirical likelihood
Art Owen, Stanford University, USA


Statistical computation usually comes down to either Monte Carlo sampling or optimization. In empirical likelihood, it is typically the latter. The empirical likelihood for a vector mean is a convex optimization problem, and as such it is usually feasible. More general problems are then handled via estimating equations whose mean is zero. In 2012, Dan Yang and Dylan Small found a numerically challenging vector mean problem, arising from instrumental variables, for which this author's EL code failed to converge. This talk exhibits an approach to empirical likelihood computation via a self-concordant convex function. Self-concordance keeps the Hessian from behaving too badly, and then Newton's method with backtracking is mathematically guaranteed to converge to the global optimum. A Bartlett correctable polynomial approximation for the empirical log likelihood, due to Corcoran, is also self-concordant. It remains challenging to profile out nuisance parameters, but given a fully specified parameter vector, computing the empirical likelihood is numerically very robust.
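A rough sketch of the kind of self-concordant modification involved, assuming the extension below a threshold is the fourth-order Taylor expansion of the logarithm (the function name, the threshold choice $\epsilon = 1/n$ and the exact form are my assumptions; the paper gives the precise construction):

```python
import numpy as np

def neg_log_star(z, eps):
    """-log(z) for z >= eps; for z < eps, minus the fourth-order Taylor expansion of
    log about eps.  Using a quartic (rather than quadratic) extension keeps the
    criterion self-concordant, so Newton's method with backtracking converges to
    the global optimum from any starting point."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    ok = z >= eps
    out[ok] = -np.log(z[ok])
    d = (z[~ok] - eps) / eps
    out[~ok] = -(np.log(eps) + d - d**2 / 2.0 + d**3 / 3.0 - d**4 / 4.0)
    return out

# Dual criterion for the EL of a vector mean mu: minimize over lam (with eps = 1/n)
#   sum_i neg_log_star(1 + lam @ (x_i - mu), eps),
# a smooth, everywhere-defined convex problem suited to damped Newton iterations.
```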


 

Differential equations and empirical likelihood
Debashis Paul, University of California, Davis, USA


Estimation of deterministic differential equations from noisy and discretely observed data has a long history in statistics. One key concern in fitting ODE models to observed data is the amount of computation required to obtain the fit if one takes the traditional nonlinear regression approach, which involves solving an initial value problem at each iteration. A popular class of methods that avoid this computational bottleneck is categorized as the "two-stage approach", whereby a nonparametric estimate of the trajectory is first obtained, followed by a linear or nonlinear fit of the ODE parameters, typically through the least squares method with the estimated trajectory as an input. Variants of this method, based on splines and kernel smoothers, have been developed, as well as hybrid methods that balance between fidelity to the data and to the ODE by using appropriate penalization schemes. In spite of their appeal and success, the choice of smoother and penalty parameters as well as robustness to the noise characteristics remain important practical questions. The empirical likelihood framework provides a flexible and efficient way of addressing the problem of finding a balance between smoothing of trajectories and estimation of parameters in an automatic way. In this talk, a new method for estimating ODEs based on this framework will be presented. The method is seen to adapt to the noise characteristics, while having the ability to incorporate natural constraints on the system. Some related developments in this direction as well as outstanding issues will be discussed.

This is joint work with Sanjay Chaudhuri (National University of Singapore) and Xiaoyan Liu (University of California, Davis).


 

Statistical inference under nonignorable sampling and nonresponse: an empirical likelihood approach
Danny Pfeffermann, Hebrew University of Jerusalem, Israel
University of Southampton, UK



When the sample selection probabilities and/or the response propensities are related to the values of the inference target variable, the distribution of the observed target variable may be very different from the distribution in the population from which the sample is taken. Ignoring the sample selection or the response mechanism in this case may result in highly biased inference. Accounting for sample selection bias is relatively simple because the sample selection probabilities are usually known, and several approaches have been proposed in the literature to deal with this problem.

On the other hand, accounting for a nonignorable response mechanism is much harder, since the response probabilities are generally unknown, requiring one to assume some structure for the response mechanism.

In this presentation we develop a new approach for modelling complex survey data, which accounts simultaneously for nonignorable sampling and nonresponse. Our approach combines the nonparametric empirical likelihood with a parametric model for the response probabilities, which contains the outcome target variable as one of the covariates. The sampling weights also feature in the inference process after appropriate smoothing. We discuss estimation issues and propose a simple test statistic for testing the model. Combining the population model with the sample selection probabilities and the model for the response probabilities defines the model holding for the missing data and enables imputing the missing sample data from this model. Simulation results illustrate the performance of the proposed approach.


 

Tests of additional conditional moment restrictions
Richard Smith, University of Cambridge, UK


The primary focus of this article is the provision of tests for the validity of a set of conditional moment constraints, additional to those defining the maintained hypothesis, that are relevant for either independent cross-sectional data or short panels with independent cross-sections. The point of departure and principal contribution of the paper is the explicit and full incorporation of the conditional moment information defining the maintained hypothesis in the design of the test statistics. Thus, the approach mirrors that of the classical parametric likelihood setting by defining restricted tests, in contradistinction to unrestricted tests that partially or completely fail to incorporate the maintained information in their formulation. The framework is quite general, allowing the parameters defining the additional and maintained conditional moment restrictions to differ and permitting the conditioning variates to differ likewise. GMM and generalized empirical likelihood test statistics are suggested. The asymptotic properties of the statistics are described under both the null hypothesis and a suitable sequence of local alternatives. An extensive set of simulation experiments explores the practical efficacy of the various test statistics in terms of empirical size and size-adjusted power, confirming the superiority of restricted over unrestricted tests. A number of restricted tests possess sufficiently satisfactory empirical size and power characteristics to allow their recommendation for econometric practice.

Co-authored with Paulo M.D.C. Parente (Instituto Universitário de Lisboa).


 

Empirical likelihood for public-use survey data
Changbao Wu, University of Waterloo, Canada


In this paper we develop empirical likelihood methods for analyzing public-use survey data that contain only the variables of interest and the final adjusted and calibrated survey weights along with final replication weights. Asymptotic distributions of the empirical likelihood ratio statistics are derived for parameters defined through estimating equations. Finite sample performances of the empirical likelihood ratio confidence intervals, with comparisons to methods based on the estimating equation theory, are investigated through simulation studies. The proposed approaches make empirical likelihood a practically useful tool for users of complex survey data. This is joint work with J.N.K. Rao of Carleton University.


 

Empirical likelihood based deviance information criterion
Teng Yin, NCS Pte Ltd


Empirical likelihood based semi-parametric methods have been used within the Bayesian framework. The deviance information criterion (DIC) for model assessment and model comparison is constructed from the posterior expectation of the deviance. In recent years, DIC has been applied to many statistical models. In this paper, we consider an empirical likelihood based deviance information criterion for Bayesian empirical likelihood. Furthermore, the validity of the measure of the effective number of parameters is also derived. Simulation results show that our method can be applied to variable selection in linear and generalized linear models. In addition, we investigate the influence of different prior distributions on the posterior inference.
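As background, the parametric DIC of Spiegelhalter et al. has the form below; the EL-based criterion discussed here presumably substitutes the (profile) empirical likelihood $L_{EL}(\theta)$ for the parametric likelihood, which is why the effective number of parameters $p_D$ needs a separate justification:

$$ \mathrm{DIC} \;=\; \overline{D(\theta)} + p_D, \qquad D(\theta) = -2\log L(\theta), \qquad p_D = \overline{D(\theta)} - D(\bar{\theta}), $$

where $\overline{D(\theta)}$ is the posterior mean of the deviance and $\bar{\theta}$ the posterior mean of the parameters; models with smaller DIC are preferred.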


 

Unified empirical likelihood ratio tests for functional concurrent linear models and the phase transition from sparse to dense functional data
Ping-Shou Zhong, Michigan State University, USA


We consider the problem of testing functional constraints in a class of functional concurrent linear models where both the predictors and the response are functional data measured at discrete time points. We propose test procedures based on the empirical likelihood with bias-corrected estimating equations to conduct both point-wise and simultaneous inferences. The asymptotic distributions of the test statistics are derived under the null and local alternative hypotheses, where sparse and dense functional data are considered in a unified framework. We find a phase transition in the asymptotic null distributions and in the orders of detectable alternatives from sparse to dense functional data. Specifically, the proposed tests can detect alternatives of root-$n$ order when the number of repeated measurements per curve is of an order larger than $n^{\eta_0}$, with $n$ being the number of curves. The transition points $\eta_0$ for the point-wise and simultaneous tests are different, and both are smaller than the transition point in the estimation problem. Simulation studies and real data analyses are conducted to demonstrate the proposed methods.
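For context, a standard way to write the functional concurrent linear model referred to above (notation assumed; the bias-corrected estimating equations and the constraint functionals are as in the paper):

$$ Y_i(t) \;=\; X_i(t)^{\top}\beta(t) + \varepsilon_i(t), \qquad i = 1,\dots,n, $$

with each curve observed only at $m_i$ discrete time points. The hypotheses constrain the coefficient functions, e.g. $H_0: \beta_j(t) = \beta_j^{0}(t)$ either at a fixed $t$ (point-wise) or over a whole interval (simultaneous), and "sparse" versus "dense" refers to how fast $m_i$ grows relative to $n$, which is where the $n^{\eta_0}$ phase transition enters.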


 