Financial Time Series Analysis: High-dimensionality, Non-stationarity and the Financial Crisis
(1 - 22 Jun 2012)


~ Abstracts ~

 

Which model to match?
Matteo Barigozzi, London School of Economics, UK


The asymptotic efficiency of indirect estimation methods, such as the efficient method of moments and indirect inference, depends on the choice of the auxiliary model. To date, this choice has been somewhat ad hoc, based on an educated guess by the researcher. In this article we develop three information criteria that help the user choose among nested and non-nested auxiliary models. They are the indirect analogues of the widely used Akaike, Bayesian and Hannan-Quinn criteria. A thorough Monte Carlo study based on two simple and illustrative models shows the usefulness of the criteria.
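
As a point of reference only, the classical Akaike, Bayesian and Hannan-Quinn criteria that the proposed indirect criteria mirror can be computed as in the minimal sketch below; the log-likelihood value, parameter count and sample size are assumed given, and the indirect analogues of the talk are not reproduced.

    import numpy as np

    def classical_criteria(loglik, k, n):
        """Classical AIC/BIC/HQ for a fitted auxiliary model.

        loglik : maximized log-likelihood of the auxiliary model
        k      : number of auxiliary parameters
        n      : sample size
        The indirect analogues in the talk replace the likelihood with an
        indirect-estimation criterion; only the classical forms are shown here.
        """
        aic = -2.0 * loglik + 2.0 * k
        bic = -2.0 * loglik + k * np.log(n)
        hq = -2.0 * loglik + 2.0 * k * np.log(np.log(n))
        return {"AIC": aic, "BIC": bic, "HQ": hq}

    # hypothetical comparison of two candidate auxiliary models
    print(classical_criteria(loglik=-1234.5, k=3, n=1000))
    print(classical_criteria(loglik=-1230.1, k=5, n=1000))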


CAW-DCC: A dynamic model for vast realized covariance matrices
Luc Bauwens, Université Catholique de Louvain, Belgium


A dynamic model for realized covariance matrices is proposed, assuming a Wishart conditional distribution. The expected value of the realized covariance matrix is specified in two steps: a model for each realized variance, and a model for the realized correlation matrix. The realized variance model is taken from the menu of existing univariate models. The realized correlation model is a dynamic conditional correlation model. Estimation is organized in two steps as well, and a quasi-ML interpretation is given to each step. Moreover, the model is applicable to large matrices, since estimation can be done by the composite likelihood method.
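
A minimal sketch of the two-step structure described above, under simplifying assumptions: each realized variance follows a naive exponential smoother (a stand-in for whichever univariate model is chosen) and the realized correlations follow a DCC-type recursion with hypothetical parameters a and b.

    import numpy as np

    def two_step_expected_cov(RC, a=0.05, b=0.90, lam=0.8):
        """Two-step construction of expected realized covariance matrices.

        RC : (T, N, N) array of realized covariance matrices.
        Step 1: each realized variance follows a simple smoother (stand-in
                for a proper univariate realized-variance model).
        Step 2: realized correlations follow a DCC-type recursion.
        """
        T, N, _ = RC.shape
        D = np.sqrt(np.array([np.diag(RC[t]) for t in range(T)]))      # (T, N) realized vols
        R = np.array([RC[t] / np.outer(D[t], D[t]) for t in range(T)])  # realized correlations
        Rbar = R.mean(axis=0)

        h = np.empty((T, N)); Q = np.empty((T, N, N)); S = np.empty((T, N, N))
        h[0] = D[0] ** 2
        Q[0] = Rbar.copy()
        for t in range(T):
            if t > 0:
                h[t] = lam * h[t - 1] + (1 - lam) * D[t - 1] ** 2          # step 1: variances
                Q[t] = (1 - a - b) * Rbar + a * R[t - 1] + b * Q[t - 1]    # step 2: correlations
            q = np.sqrt(np.diag(Q[t]))
            Rt = Q[t] / np.outer(q, q)
            S[t] = np.outer(np.sqrt(h[t]), np.sqrt(h[t])) * Rt             # expected covariance
        return S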


Modeling of high-dimensional time series via time-changed Levy processes: statistical inference without curse of dimensionality
Denis Belomestny, Universität Duisburg-Essen, Germany


The problem of semi-parametric inference on the parameters of a multidimensional Lévy process with independent components, based on low-frequency observations of the corresponding time-changed Lévy process, will be considered. We show that this problem is closely related to the problem of composite function estimation, which has recently received much attention in the statistical literature. Under suitable identifiability conditions we propose a consistent estimate of the Lévy density of the underlying Lévy process and derive uniform as well as pointwise convergence rates, which show that the curse of dimensionality can be avoided under additional assumptions that are rather natural from an economic point of view.
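
For intuition, a hedged simulation sketch of low-frequency observations from a time-changed Lévy process: here the base process has independent Brownian components and the time change is a Gamma subordinator. Both choices are illustrative stand-ins, not the general setting of the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_time_changed_levy(n_obs=500, dim=3, delta=1.0, nu=0.2):
        """Low-frequency observations of a time-changed Levy process.

        The base process has independent Brownian components and the time
        change T is a Gamma subordinator with unit mean rate (illustrative
        choices only).  Returns the cumulated process observed every delta
        units of calendar time.
        """
        # increments of the Gamma time change over each observation interval
        dT = rng.gamma(shape=delta / nu, scale=nu, size=n_obs)
        # conditional on the time-change increments, the Levy increments are
        # independent Gaussians with variance equal to the elapsed business time
        dX = rng.normal(size=(n_obs, dim)) * np.sqrt(dT)[:, None]
        return np.cumsum(dX, axis=0)

    X = simulate_time_changed_levy()
    print(X.shape)  # (500, 3)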


Dynamic modeling and prediction of risk neutral densities
Rong Chen, Rutgers University, USA


The risk neutral density is extensively used in option pricing and risk management in finance. It is often implied from observed option prices through a complex nonlinear relationship. In this study, we model the dynamic structure of the risk neutral density through time, investigating modeling approaches, estimation methods and prediction performance. State space models, the Kalman filter and sequential Monte Carlo methods are used. Simulation and real data examples are presented.
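
The nonlinear relationship between option prices and the risk neutral density can be illustrated by the standard Breeden-Litzenberger identity q(K) = exp(rT) * d^2C/dK^2. The sketch below backs out a density from a grid of call prices by finite differences; this is the textbook extraction step, not the dynamic state space model of the talk, and the Black-Scholes price curve is only a hypothetical input.

    import numpy as np
    from scipy.stats import norm

    def risk_neutral_density(strikes, call_prices, r, T):
        """Breeden-Litzenberger extraction: q(K) = exp(rT) * d^2 C / dK^2,
        approximated by second-order finite differences on an equally spaced
        strike grid."""
        K = np.asarray(strikes, dtype=float)
        C = np.asarray(call_prices, dtype=float)
        dK = K[1] - K[0]
        d2C = (C[2:] - 2 * C[1:-1] + C[:-2]) / dK ** 2
        return K[1:-1], np.exp(r * T) * d2C

    def bs_call(S, K, r, sigma, T):
        """Black-Scholes call prices, used only to generate a smooth test curve."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    K = np.linspace(60, 140, 161)
    Kmid, q = risk_neutral_density(K, bs_call(100.0, K, 0.02, 0.2, 0.5), 0.02, 0.5)
    print(np.trapz(q, Kmid))   # close to 1: the extracted density integrates to one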


Sparse vector autoregressive modeling
Richard Davis, Columbia University, USA


The vector autoregressive (VAR) model has been widely used for modeling temporal dependence in a multivariate time series. For large (and even moderate) dimensions, the number of VAR parameters can be prohibitively large, resulting in noisy estimates and difficult-to-interpret temporal dependence. As a remedy, we propose a methodology for fitting sparse VAR models (sVAR) in which most of the AR coefficients are set equal to zero. The first step in selecting the nonzero coefficients is based on an estimate of the partial squared coherency (PSC) together with the use of BIC. The PSC is useful for quantifying conditional relationships between marginal series in a multivariate time series. A refinement step is then applied to further reduce the number of parameters. The performance of this 2-step procedure is illustrated with both simulated data and several real examples, one of which involves financial data. (This is joint work with Pengfei Zang and Tian Zheng.)
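
A hedged sketch of the PSC screening idea: estimate the cross-spectral matrix, invert it at each frequency, and compute pairwise partial squared coherencies; pairs whose PSC is uniformly small are candidates for zero AR coefficients. This is a simplified stand-in, the actual procedure couples the PSC ranking with BIC and a refinement step.

    import numpy as np
    from scipy.signal import csd

    def partial_squared_coherency(X, fs=1.0, nperseg=256):
        """Pairwise partial squared coherencies of a multivariate series X (T x N),
        estimated from a smoothed cross-spectral matrix:
        PSC_ij(w) = |G_ij(w)|^2 / (G_ii(w) G_jj(w)),  with G = f(w)^{-1}."""
        T, N = X.shape
        freqs, _ = csd(X[:, 0], X[:, 0], fs=fs, nperseg=nperseg)
        F = np.zeros((len(freqs), N, N), dtype=complex)
        for i in range(N):
            for j in range(N):
                _, F[:, i, j] = csd(X[:, i], X[:, j], fs=fs, nperseg=nperseg)
        PSC = np.zeros((len(freqs), N, N))
        for k in range(len(freqs)):
            G = np.linalg.inv(F[k])
            d = np.real(np.diag(G))
            PSC[k] = np.abs(G) ** 2 / np.outer(d, d)
        return freqs, PSC

    # toy check: series 1 depends on lagged series 0, series 2 and 3 are noise
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 4))
    X[:, 1] += 0.8 * np.roll(X[:, 0], 1)
    f, PSC = partial_squared_coherency(X)
    print(PSC.max(axis=0).round(2))   # noticeably larger off-diagonal entry for pair (0, 1)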


Local-momentum GARCH
Jin-Chuan Duan, National University of Singapore


Variance-stationary GARCH models can be expressed in a volatility-reversion form, which can then be interpreted as stochastic volatilities mean-reverting towards the stationary volatility of the process. We observe that volatility exhibits momentum, not in the sense of simple autoregressive mean-reversion; rather, volatility, being stochastic, tends to hover around its recent level. Thus, we propose a local-momentum GARCH model (LM-GARCH) that has the property of locally reverting to a time-varying level determined by a combination of the stationary volatility and the historical volatility of a moving sample. We study the theoretical properties of the LM-GARCH model and demonstrate empirically with S&P 500 index returns that this new model, with one extra parameter, offers far superior empirical performance over its conventional counterpart. In fact, it also dominates the two-component GARCH model, which has more parameters and was designed to allow for time-varying long-run volatilities.
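
To fix ideas, a plain GARCH(1,1), h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1}, has the volatility-reversion form h_t - h_bar = (alpha + beta)(h_{t-1} - h_bar) + alpha(eps_{t-1}^2 - h_{t-1}) with h_bar = omega/(1 - alpha - beta). The sketch below illustrates the local-momentum idea by replacing the constant target h_bar with a convex combination of h_bar and a moving-window historical variance; the weight w and the exact form of the target are hypothetical stand-ins, not the specification of the talk.

    import numpy as np

    def local_momentum_recursion(eps, omega, alpha, beta, w=0.5, window=63):
        """Stylized volatility recursion with a locally varying reversion target.

        Plain GARCH(1,1) reverts towards the constant stationary variance
        h_bar = omega / (1 - alpha - beta); here the target is a convex
        combination of h_bar and the variance of a moving sample of residuals
        (an illustrative stand-in for the LM-GARCH target).
        """
        h_bar = omega / (1.0 - alpha - beta)
        T = len(eps)
        h = np.full(T, h_bar)
        for t in range(1, T):
            seg = eps[max(0, t - window):t]
            hist = np.var(seg) if len(seg) > 1 else h_bar     # local historical level
            target = w * h_bar + (1.0 - w) * hist
            h[t] = target + (alpha + beta) * (h[t - 1] - target) \
                          + alpha * (eps[t - 1] ** 2 - h[t - 1])
        return h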


Distance-to-default - concept and implementation
Jin-Chuan Duan, National University of Singapore


This talk begins by introducing the structural credit risk model of Merton (1974), with which one can define the distance-to-default (DTD) measure. I will discuss the conceptual advantage of using DTD as opposed to using leverage. The Merton model relies on the asset value dynamics, but the market value of a firm's assets cannot be directly observed. This challenge has generated a literature on ways of implementing the Merton model. I will introduce several ad hoc methods and discuss their shortcomings and/or limitations. A way of estimating DTD that is popular in the academic literature and in industry applications is the KMV method. I will describe this method and its limitations, particularly in terms of its application to financial firms. In addition, I will introduce the transformed-data maximum likelihood method of Duan (1994) for the Merton model, and show how it can be used to effectively handle DTD estimation for financial and non-financial firms.
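
A hedged sketch of the standard Merton-model calculation used by the ad hoc two-equation approach: back out the unobserved asset value and asset volatility from the observed equity value and equity volatility, then form the distance-to-default. The inputs, the one-year horizon and the drift convention are simplified; the transformed-data MLE discussed in the talk is not shown.

    import numpy as np
    from scipy.optimize import fsolve
    from scipy.stats import norm

    def merton_dtd(E, sigma_E, F, r, T=1.0):
        """Solve the two Merton equations for asset value V and asset
        volatility sigma_V, then compute a distance-to-default.

        E, sigma_E : equity market value and equity volatility
        F          : face value of debt due at horizon T
        r          : risk-free rate
        """
        def equations(x):
            V, sigma_V = x
            d1 = (np.log(V / F) + (r + 0.5 * sigma_V ** 2) * T) / (sigma_V * np.sqrt(T))
            d2 = d1 - sigma_V * np.sqrt(T)
            eq1 = V * norm.cdf(d1) - F * np.exp(-r * T) * norm.cdf(d2) - E   # equity value
            eq2 = V * norm.cdf(d1) * sigma_V - sigma_E * E                   # equity volatility
            return [eq1, eq2]

        V, sigma_V = fsolve(equations, x0=[E + F, sigma_E * E / (E + F)])
        # one common DTD definition; drift conventions differ across implementations
        dtd = (np.log(V / F) + (r - 0.5 * sigma_V ** 2) * T) / (sigma_V * np.sqrt(T))
        return V, sigma_V, dtd

    print(merton_dtd(E=3.0, sigma_E=0.6, F=10.0, r=0.03))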


Dynamic default predictions - spot and forward intensity approaches
Jin-Chuan Duan, National University of Singapore


This talk begins with a short discussion on the nature of corporate default predictions and the elements required of a good default prediction model. A family of dynamic models based on doubly stochastic Poisson processes is introduced as a device to relate common risk factors and individual attributes to observed defaults while handling the censoring effect arising from other forms of firm exit such as mergers and acquisitions. We describe two implementation frameworks based on spot intensity (Duffie, Saita and Wang, 2007) and forward intensity (Duan, Sun and Wang, 2011), respectively. The discussion will cover their conceptual foundation, econometric formulation, implementation issues and empirical findings using US corporate data. The talk will also touch upon the role of momentum in default prediction and on how to measure distance-to-default for financial firms. I will also argue for the forward intensity method because it is easily scalable for practical applications that inevitably deal with a large number of firms and many covariates. In fact, the forward intensity method has been successfully implemented by the non-profit Credit Research Initiative (CRI) at the Risk Management Institute of the National University of Singapore to power its default prediction system, which produces daily updated default probabilities for about 30,000 exchange-listed firms in 30 economies in Asia, North America and Europe.
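
A minimal sketch of the doubly stochastic bookkeeping behind such models: given discretized paths of the default intensity and of the intensity of other forms of exit, the term structure of default probabilities follows from simple survival algebra. The parametrization of the intensities as functions of covariates, which is the heart of the spot and forward intensity models, is omitted here, and the flat intensities in the example are hypothetical.

    import numpy as np

    def default_probabilities(lam_default, lam_other, dt=1.0 / 12):
        """Cumulative default probabilities for a doubly stochastic Poisson model
        with a competing exit intensity, given discretized intensity paths
        (per annum) over successive periods of length dt.

        Within a period, an exit occurs with probability 1 - exp(-lam_tot*dt),
        and it is a default with probability lam_d / lam_tot (intensities are
        treated as constant within each period)."""
        lam_d = np.asarray(lam_default, float)
        lam_o = np.asarray(lam_other, float)
        lam_tot = lam_d + lam_o
        surv = np.exp(-np.cumsum(lam_tot * dt))            # survive all exits up to t_k
        surv_prev = np.concatenate(([1.0], surv[:-1]))
        pd_period = surv_prev * (lam_d / lam_tot) * (1.0 - np.exp(-lam_tot * dt))
        return np.cumsum(pd_period)                        # P(default by t_k)

    # hypothetical flat intensities: 2% p.a. default, 5% p.a. other exit, 3-year horizon
    print(default_probabilities([0.02] * 36, [0.05] * 36)[[11, 23, 35]])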


Leverage effect puzzle
Jianqing Fan, Princeton University, USA


The leverage effect parameter refers to the correlation between asset returns and their volatility. A natural estimate of this parameter is the correlation between daily returns and the changes in daily volatility estimated from high-frequency data. The puzzle is that such an estimate yields nearly zero correlation for all assets we have tested, such as the S&P 500 and the Dow Jones Industrial Average. To solve the puzzle, we develop a model to understand the bias problem. The asymptotic biases involved in high-frequency estimation of the leverage effect parameter are derived. They quantify the biases due to discretization errors in estimating spot volatilities, the biases due to estimation error, and the biases due to market microstructure noise. They enable us to propose novel bias correction methods for estimating the leverage effect parameter. The proposed methods are convincingly illustrated by simulation studies and several empirical applications.

Joint work with Yacine Ait-Sahalia and Yingying Li.
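
The "natural" estimate referred to above can be sketched as follows: compute a daily realized volatility from intraday returns, difference it, and correlate the changes with the daily returns. On real index data this naive correlation comes out close to zero, which is the puzzle; the bias corrections of the paper are not reproduced here.

    import numpy as np

    def naive_leverage_estimate(intraday_returns):
        """intraday_returns: array of shape (n_days, n_intraday) of
        high-frequency log returns.  Returns the naive correlation between
        daily returns and changes in daily realized volatility."""
        r = np.asarray(intraday_returns, float)
        daily_ret = r.sum(axis=1)                    # daily log return
        rv = np.sqrt((r ** 2).sum(axis=1))           # daily realized volatility
        d_rv = np.diff(rv)
        return np.corrcoef(daily_ret[1:], d_rv)[0, 1]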


Sudden changes in the structure of time series
Jürgen Franke, Universität Kaiserslautern, Germany


Time series, in particular those in finance, sometimes switch from one regime to another, e.g. from a low-volatility market to a state with higher risk. We consider tests for detecting such changepoints which do not require that the model under consideration is correctly specified.

For situations where the time series rarely, but repeatedly switches between a finite number of regimes, we consider Markov switching models for time series and algorithms for fitting them to data as well as filtering algorithms for reconstructing the sequence of hidden states driving the observed process.

In finance, time series of interest are frequently high-dimensional. In that case, standard estimation algorithms do not work well. As a first step towards handling those difficulties, we consider a hidden Markov model with high-dimensional conditionally Gaussian data, which may be interpreted as the vector of asset returns from some large portfolio. For that rather simple Markov-switching model, we discuss an approach based on shrinkage which allows for considerably better estimates for model parameters like covariance matrices of the regimes and the transition matrix of the hidden Markov chain.
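
A hedged illustration of the kind of data-generating process considered: a hidden two-state Markov chain switching the covariance of a high-dimensional Gaussian return vector, together with a simple linear shrinkage of a regime covariance towards a scaled identity (in the spirit of Ledoit-Wolf) as a stand-in for the shrinkage step discussed in the talk. The regime covariances and the shrinkage weight are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_switching_returns(T=1000, dim=50, p_stay=0.98):
        """Two-regime hidden Markov model: calm vs. high-risk covariance."""
        calm = 0.01 * np.eye(dim)
        crisis = 0.04 * (0.5 * np.eye(dim) + 0.5 * np.ones((dim, dim)))
        states = np.zeros(T, dtype=int)
        for t in range(1, T):
            states[t] = states[t - 1] if rng.random() < p_stay else 1 - states[t - 1]
        X = np.array([rng.multivariate_normal(np.zeros(dim),
                                              calm if s == 0 else crisis)
                      for s in states])
        return X, states

    def shrinkage_cov(X, delta=0.3):
        """Linear shrinkage of the sample covariance towards a scaled identity
        (a simple stand-in for the regime-wise shrinkage estimator)."""
        S = np.cov(X, rowvar=False)
        target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
        return (1 - delta) * S + delta * target

    X, states = simulate_switching_returns()
    Sigma_calm = shrinkage_cov(X[states == 0])   # uses the true states only for illustration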


Mixed frequency vector autoregressive models and the consequences of ignoring high frequency data
Eric Ghysels, University of North Carolina, USA


Many time series are sampled at different frequencies. When we study co-movements between such series we usually analyze the joint process sampled at a common low frequency. This has consequences in terms of potentially mis-specifying the comovements and hence the analysis of impulse response functions - a commonly used tool for economic policy analysis. We introduce a class of mixed frequency VAR models that allows us to measure the impact of high frequency data on low frequency data and vice versa. This allows us to characterize explicitly the mis-specification of a traditional common low frequency VAR and its implied mis-specified impulse response functions. The class of mixed frequency VAR models can also characterize the timing of information releases for a mixture of sampling frequencies and the real-time updating of predictions caused by the flow of high frequency information. The mixed frequency VAR is an alternative to commonly used state space models for mixed frequency data. State space models involve latent processes, and therefore rely on filtering to extract hidden states that are used in order to predict future outcomes. Hence, they are parameter-driven models, whereas mixed frequency VAR models are observation-driven models: they are formulated exclusively in terms of observable data, do not involve latent processes, and thus avoid the need for measurement equations, filtering, etc. We also propose various parsimonious parameterizations, in part inspired by recent work on MIDAS regressions, for mixed frequency VAR models. Various estimation procedures are also proposed, both classical and Bayesian. Numerical and empirical examples quantify the consequences of ignoring mixed frequency data.
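
The key bookkeeping can be sketched by stacking the m high-frequency observations of each low-frequency period together with the low-frequency variable into one observation vector, on which an ordinary VAR is then fitted. The sketch below (one monthly series with m = 3 per quarter and one quarterly series) illustrates only that stacking; the parsimonious MIDAS-type parameterizations and the estimation procedures of the paper are not shown.

    import numpy as np

    def stack_mixed_frequency(x_high, y_low, m=3):
        """Stack a high-frequency series and a low-frequency series into the
        observation vector of a mixed frequency VAR.

        x_high : array of length m*T_low (e.g. monthly observations)
        y_low  : array of length T_low   (e.g. quarterly observations)
        Returns an array of shape (T_low, m + 1) whose t-th row is
        (x_{t,1}, ..., x_{t,m}, y_t).
        """
        x = np.asarray(x_high, float).reshape(-1, m)
        y = np.asarray(y_low, float).reshape(-1, 1)
        assert len(x) == len(y), "series must cover the same low-frequency span"
        return np.hstack([x, y])

    # a standard VAR(p) fitted to these stacked vectors is the mixed frequency VAR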


Volatility of price indices for heterogeneous goods
Christian Hafner, Université Catholique de Louvain, Belgium


Hedonic regression is a common tool for constructing price indices of markets with heterogeneous goods such as, for example, the real estate and art markets. Some work has been concerned with extending the classical model assumptions to robustify the index estimates. However, there does not seem to be a systematic treatment of volatility in these markets. Considering heterogeneous goods as alternative investments, this lack of reliable volatility measures prevents an objective assessment of investment opportunities based on classical mean-variance criteria. Moreover, derivatives on subsets of the traded goods require precise modelling and estimation of the underlying volatility. For example, in art markets, auction houses are interested in derivatives on collections or individual art objects in order to hedge their risks. In this paper we propose a new model which explicitly defines an underlying stochastic process for the price index. The model can be estimated using maximum likelihood and an extended version of the Kalman filter. We derive theoretical properties of the volatility estimator and show that it outperforms the standard estimator. To illustrate the usefulness of the model, we apply it to a large data set of international blue chip artists.
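
As background, a bare-bones hedonic index regresses log prices on object characteristics and time dummies; the OLS sketch below is that classical construction, whose volatility treatment the paper extends with an explicit stochastic index process estimated by maximum likelihood and an extended Kalman filter (not shown here).

    import numpy as np

    def hedonic_index(log_prices, X, period):
        """Classical hedonic price index via OLS with time dummies.

        log_prices : (n,) log transaction prices
        X          : (n, k) characteristics of the heterogeneous goods
        period     : (n,) integer time period of each sale (0, ..., T-1)
        Returns the estimated log index per period (first period normalized to 0).
        """
        log_prices = np.asarray(log_prices, float)
        X = np.asarray(X, float)
        period = np.asarray(period, int)
        n = len(log_prices)
        T = period.max() + 1
        D = np.zeros((n, T - 1))
        for t in range(1, T):
            D[period == t, t - 1] = 1.0            # time dummies, base period 0
        Z = np.hstack([np.ones((n, 1)), X, D])
        coef, *_ = np.linalg.lstsq(Z, log_prices, rcond=None)
        return np.concatenate(([0.0], coef[1 + X.shape[1]:]))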


Asymptotic theory of the QMLE for GARCH-X Models with stationary and nonstationary covariates
Hee Joon Han, National University of Singapore


This paper investigates the asymptotic properties of the Gaussian quasi-maximum-likelihood estimators (QMLEs) of the GARCH model augmented by an additional explanatory variable - the so-called GARCH-X model. The additional covariate is allowed to exhibit any degree of persistence as captured by its long-memory parameter d_{x}; in particular, we allow for both stationary and non-stationary covariates. We show that the QMLEs of the regression coefficients entering the volatility equation are consistent and normally distributed in large samples independently of the degree of persistence. As such, standard inferential tools such as t-statistics do not have to be adjusted to the level of persistence. On the other hand, the intercept in the volatility equation is not identified when the covariate is non-stationary, a result akin to that of Jensen and Rahbek (2004, Econometric Theory 20), who consider pure GARCH models with explosive volatility.
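
For reference, a common way to write the GARCH-X volatility equation adds a (possibly highly persistent) covariate term to a standard GARCH(1,1) recursion. The simulation sketch below uses a squared covariate term and a persistent AR(1) covariate as stand-ins; the exact functional form of the covariate term and the long-memory specification of the paper may differ.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_garch_x(T, omega, alpha, beta, pi, x):
        """GARCH-X recursion:
        h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1} + pi*x_{t-1}^2,
        where x is an exogenous covariate (stationary or highly persistent)."""
        eps = np.zeros(T)
        h = np.zeros(T)
        h[0] = omega / (1 - alpha - beta)
        eps[0] = np.sqrt(h[0]) * rng.standard_normal()
        for t in range(1, T):
            h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1] + pi * x[t - 1] ** 2
            eps[t] = np.sqrt(h[t]) * rng.standard_normal()
        return eps, h

    # a very persistent AR(1) covariate as a simple stand-in for a long-memory series
    x = np.zeros(2000)
    for t in range(1, 2000):
        x[t] = 0.99 * x[t - 1] + 0.1 * rng.standard_normal()
    eps, h = simulate_garch_x(2000, omega=0.05, alpha=0.05, beta=0.90, pi=0.02, x=x)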


Application of functional data analysis to intraday financial data
Piotr Kokoszka, Colorado State University, USA


The talk will review several applications of the ideas of functional data analysis to high frequency intraday data. I will introduce functional versions of the capital asset pricing model, factor models, functional autoregressions, intraday volatility curves and temporal predictability of the intraday price curves. The unifying theme of the ideas that I will discuss is their focus on the shape of the curves derived from high frequency data, rather than the small scale variability between trades. Mathematical ideas underlying this work will also be briefly discussed.


Factor modeling for high dimensional time series
Clifford Lam, London School of Economics, UK


We investigate factor modelling when the number of time series grows with the sample size. In particular, we focus on the case where the number of time series is at least of the same order as the sample size. We introduce a method utilizing the autocorrelations of the time series for estimation of the factor loading matrix and the factor series, which in the end is equivalent to an eigenanalysis of a non-negative definite matrix. Asymptotic properties will be presented, as well as the choice of the number of factors by an eye-ball test, together with the theory behind such a test. The method will be illustrated with an analysis of a set of mean sea level pressure (MSLP) data, as well as extensive simulation results. Some new results about standard PCA will also be presented, which show that PCA still works when the noise vector in the factor model is cross-sectionally correlated to a certain extent, beyond which consistency of the factor loading matrix is not guaranteed. The method we introduce, however, does not suffer from heavy cross-sectional correlations in the noise. Time permitting, an improvement of our method will also be introduced, which achieves the excellent performance of PCA under classical settings while performing better than PCA when the noise level and the cross-sectional correlations become strong.
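
A compact sketch of the autocovariance-based eigenanalysis described above: accumulate lagged autocovariance matrices into a non-negative definite matrix and take its leading eigenvectors as the estimated factor loadings. The eigenvalue-ratio rule used here is only a simple numerical proxy for the eye-ball choice of the number of factors discussed in the talk.

    import numpy as np

    def autocov_factor_loadings(Y, k0=5, r=None):
        """Estimate factor loadings from lagged autocovariances.

        Y  : (T, N) observed panel
        k0 : number of lags pooled
        M = sum_{k=1..k0} Sigma_y(k) Sigma_y(k)'  is non-negative definite;
        its leading eigenvectors span the estimated factor loading space.
        """
        Y = Y - Y.mean(axis=0)
        T, N = Y.shape
        M = np.zeros((N, N))
        for k in range(1, k0 + 1):
            S_k = Y[k:].T @ Y[:-k] / T               # lag-k autocovariance
            M += S_k @ S_k.T
        eigval, eigvec = np.linalg.eigh(M)
        eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # descending order
        if r is None:
            m = min(10, N - 1)
            # eigenvalue-ratio proxy for the eye-ball choice of the number of factors
            r = int(np.argmin(eigval[1:m + 1] / eigval[:m])) + 1
        A_hat = eigvec[:, :r]            # estimated loading matrix (orthonormal columns)
        F_hat = Y @ A_hat                # estimated factor series
        return A_hat, F_hat, eigval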


Testing for structural change in quantile regression models
Tatsushi Oka, National University of Singapore


This paper considers subgradient-based tests for structural change in linear quantile regression models at an unknown timing, allowing for serially dependent errors. Unlike previous studies assuming uncorrelated errors in quantile models, the limiting distribution of the subgradient process involves a long-run variance-covariance matrix. The commonly used nonparametric spectral density estimators could lead to non-monotonic power of the tests, as documented in the literature. To circumvent the non-monotonic power problem, we adopt a self-normalized approach, which uses a normalization matrix constructed from recursive subsample estimates of the subgradient instead of a variance estimate. Our test statistics can be used to detect structural change not only in a specific regression quantile but also in multiple distinct regression quantiles. We show that the asymptotic null distributions of the tests based on the self-normalized approach are free of nuisance parameters and that their critical values are easily obtained by simulation, as long as the number of regression quantiles of interest is finite. The testing procedure does not require the selection of bandwidth parameters and is easily implemented. The finite sample behavior of the test statistics is examined through simulation experiments.
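
A hedged sketch of the basic building block, the partial-sum process of quantile regression subgradients whose fluctuations such tests monitor. The full-sample fit uses statsmodels' QuantReg; the self-normalization matrix and the exact test statistic of the paper are not reproduced, so this is only an illustration of the object being tracked.

    import numpy as np
    from statsmodels.regression.quantile_regression import QuantReg

    def subgradient_process(y, X, tau=0.5):
        """Scaled partial sums (CUSUM-type process) of quantile regression
        subgradients x_t * (tau - 1{y_t <= x_t' beta_hat}), evaluated at the
        full-sample estimate beta_hat."""
        beta_hat = np.asarray(QuantReg(y, X).fit(q=tau).params)
        psi = X * (tau - (y <= X @ beta_hat).astype(float))[:, None]
        return np.cumsum(psi, axis=0) / np.sqrt(len(y))

    # the sup-norm of this process is a natural (un-normalized) test statistic;
    # the paper replaces the usual long-run variance scaling by self-normalization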


The value of model sophistication: an application to pricing Dow Jones industrial average options
Jeroen Rombouts, HEC Montréal, Canada


We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 248 multivariate models that differ in their specification of the conditional variance, conditional correlation, and innovation distribution. All models belong to the dynamic conditional correlation class, which is particularly suited because it allows one to consistently estimate the risk neutral dynamics with a manageable computational effort in relatively large scale problems. It turns out that the most important gain in pricing accuracy comes from increasing the sophistication in the marginal variance processes (i.e. nonlinearity, asymmetry and component structure). Enriching the model with more complex correlation models, and replacing the Gaussian innovation assumption with a Laplace one, improves the pricing in a smaller way. Apart from investigating directly the value of model sophistication in terms of dollar losses, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.


Time-varying multivariate spectral analysis with applications to evolutionary factor analysis
Rainer von Sachs, Université Catholique de Louvain, Belgium


In recent work, Eichler, Motta and von Sachs (J. Econometrics 2011) developed a time-evolving version of dynamic factor analysis based on principal component analysis in the frequency domain. Motivated by economic and financial data observed over long time periods which show smooth transitions over time in their covariance structure, the dynamic structure of the factor model has been rendered non-stationary over time by proposing a deterministic time variation of its loadings. In order to carry out this approach with observed time series panels, a consistent estimator of the time-varying spectral density matrix of the data is necessary. In this talk we present a new approach to adaptively estimate an evolutionary spectrum of a locally stationary process by generalizing the recently developed tree-structured wavelet estimator of Freyermuth et al (JASA 2010, EJS 2011) to a 2-dimensional smoother over time and frequency. This approach appears to be a more practical compromise between the fully non-linear threshold estimator advocated in Neumann and von Sachs (AoS 1998) and a non-adaptive linear (kernel) smoother. We discuss its potential for the development of a (more interpretable) segmented time-varying factor analysis, with loadings that behave approximately stationary over segments of time-constancy between breakpoints - potentially caused by external market events such as political, financial or environmental crises, changing monetary policy decisions and the like.


Is idiosyncratic volatility idiosyncratic? Using high-frequency data to measure idiosyncratic volatility
Kevin Sheppard, University of Oxford, UK


This paper studies the issue of estimating idiosyncratic volatility using high-frequency data. Particular attention is given to the case where prices are contaminated by market microstructure noise. The estimator of the idiosyncratic variance is shown to be consistent and asymptotically normal. The new estimator is applied to extract the idiosyncratic volatility of the S&P 500, where it is found that idiosyncratic variances are highly (cross-sectionally) dependent.
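
A simplified sketch of the quantity involved, ignoring the microstructure-noise corrections that are the paper's main contribution: use high-frequency returns to form a realized beta against the market and subtract the explained part from the asset's realized variance.

    import numpy as np

    def realized_idiosyncratic_variance(r_asset, r_market):
        """Naive high-frequency idiosyncratic variance for one asset:
        RV_i - beta_i^2 * RV_m  with  beta_i = RCov(i, m) / RV_m.
        No microstructure-noise correction is applied (unlike the paper)."""
        r_asset = np.asarray(r_asset, float)
        r_market = np.asarray(r_market, float)
        rv_m = np.sum(r_market ** 2)
        rcov = np.sum(r_asset * r_market)
        beta = rcov / rv_m
        rv_i = np.sum(r_asset ** 2)
        return rv_i - beta ** 2 * rv_m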


Bernstein von Mises theorem and model calibration
Vladimir Spokoiny, Weierstraß-Institut für Angewandte Analysis und Stochastik, Germany


The paper considers the problem of calibrating option pricing models to market data. This problem can be treated as the choice of a proper estimation scheme in a nonlinear inverse problem. We suggest a Bayesian approach with a regular prior and show that, under mild conditions, the posterior is nearly Gaussian and close to the distribution of the penalized MLE.


Discrete Fourier transform methods in the analysis of nonstationary time series
Suhasini Subba Rao, Texas A&M University, USA


The Discrete Fourier Transform (DFT) is often used to analyse time series, usually under the assumption of (second order) stationarity of the time series. One of the main reasons for using this transformation is that the DFT tends to decorrelate the original time series. Thus the DFT can be treated as an almost uncorrelated complex random variable, and standard methods for independent data can be applied to it. It can be shown that this useful decorrelation property does not hold for nonstationary time series. This would suggest that the DFT is no longer a helpful tool for nonstationary time series analysis. However, the purpose of this talk is to demonstrate that correlations between the DFTs contain useful information about the nonstationary nature of the underlying time series. We will exploit this property both to test for stationarity of the time series and to estimate the time-varying (nonstationary) spectral density function. More precisely, we will use the starkly contrasting correlation properties of the DFTs under stationarity and nonstationarity to construct a test for stationarity. Motivated by this test, in particular its behaviour under the alternative of local stationarity, we construct an estimator of the time-varying spectral density function.
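
A small Monte Carlo illustration of the property exploited: DFT ordinates of a stationary series are nearly uncorrelated across frequencies, whereas those of a nonstationary series (here, white noise with a time-varying variance) are not. The frequency indices and the variance profile are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def dft_corr(sample_paths, k1=10, k2=11):
        """Monte Carlo correlation (in modulus) between DFT ordinates at two
        Fourier frequencies, estimated over many simulated sample paths."""
        J = np.fft.fft(sample_paths, axis=1) / np.sqrt(sample_paths.shape[1])
        a, b = J[:, k1], J[:, k2]
        num = np.mean(a * np.conj(b)) - np.mean(a) * np.conj(np.mean(b))
        den = np.sqrt(np.mean(np.abs(a) ** 2) * np.mean(np.abs(b) ** 2))
        return np.abs(num) / den

    T, n_sim = 512, 2000
    stationary = rng.standard_normal((n_sim, T))
    sigma = 1.0 + 2.0 * np.linspace(0, 1, T)          # time-varying standard deviation
    nonstationary = stationary * sigma
    print(dft_corr(stationary))      # close to 0
    print(dft_corr(nonstationary))   # clearly away from 0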


The low-rank correlation matrix problems: nonconvex regularization and successive convex relaxations
Defeng Sun, National University of Singapore


In this talk, we start by introducing the problem of finding a nearest correlation matrix of exact low rank from a collection of independent noisy observations of its entries. For this problem, the widely used nuclear norm regularization technique can no longer be applied to encourage a low-rank solution. Here, we propose to add a nonconvex regularization term to address this issue. This nonconvex term can be successively replaced by a sequence of linear terms. Each of the resulting convex optimization problems can be easily written as an H-weighted least squares semidefinite programming problem, which can be efficiently solved by our recently developed methodologies, even for large-scale cases. Moreover, if the linearization term is selected from the observation matrix so that it represents the rank information, then the convex relaxation problem in the first step is already sufficient for achieving our goal. In addition, we discuss the rank consistency of the latter convex problem and provide non-asymptotic bounds on the estimation error. We then proceed to the nearest correlation matrix optimization problem with a given rank constraint. We use a nonconvex Lipschitz continuous function to represent the rank constraint exactly and transform it into a regularization term. This nonconvex problem is well approximated by successive convex relaxations. Finally, we touch on extensions to a more general setting.
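
For context, the crude rank-r heuristic that the successive convex relaxations of the talk improve upon can be sketched as follows: truncate the eigendecomposition of an (estimated) correlation matrix to rank r and rescale so that the diagonal is exactly one. This is only a baseline heuristic, not the majorized or convexified procedure discussed.

    import numpy as np

    def rank_r_correlation(C, r):
        """Naive rank-r correlation approximation: keep the r leading
        eigen-components of C, then rescale the rows of the factor so the
        resulting matrix has unit diagonal and rank at most r."""
        eigval, eigvec = np.linalg.eigh(C)
        idx = np.argsort(eigval)[::-1][:r]
        B = eigvec[:, idx] * np.sqrt(np.clip(eigval[idx], 0, None))
        B /= np.linalg.norm(B, axis=1, keepdims=True)     # unit-norm rows
        return B @ B.T                                    # rank <= r, unit diagonal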


A Quasi-maximum likelihood approach for covariance matrix with high frequency data
Chengyong Tang, National University of Singapore


Estimating the integrated covariance matrix (ICM) of underlying assets from high frequency financial trading data is difficult due to data contaminated by microstructure noise, asynchronous trading records, and increasing data dimensionality. In this paper we study a quasi-maximum likelihood (QML) approach for estimating an ICM and explore a novel and convenient approach for evaluating the estimator, both theoretically for its properties and numerically for its practical implementation. We show that the QML estimator is consistent and asymptotically normal. The efficiency gain of the QML approach is theoretically quantified, and numerically demonstrated via extensive simulation studies. An application of the QML approach is demonstrated with real financial trading data.
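
For orientation, the benchmark that microstructure noise and asynchronicity break down is the plain realized covariance matrix of synchronized returns; the QML approach of the paper refines this, but the baseline below fixes the notation.

    import numpy as np

    def realized_covariance(returns):
        """Plain realized covariance matrix: sum of outer products of
        synchronized high-frequency return vectors (n_obs x n_assets).
        This benchmark is biased under microstructure noise, which is what
        the QML approach is designed to handle."""
        R = np.asarray(returns, float)
        return R.T @ R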


Stochastic models and statistical analysis for financial data
Yazhen Wang, University of Wisconsin-Madison, USA


Complex stochastic models are employed in modern finance, and cutting edge statistical methods are being used for inferences and computations for these models. Much of the theoretical development of contingent claims pricing models has been based on the continuous-time diffusion models.
More recently, the continuous-time modeling approach has been utilized in the analysis of high frequency financial data. On the other hand, empirical financial studies are often based on discrete time series models such as GARCH and stochastic volatility models. The lectures are intended to cover these financial stochastic models and their statistical inference. Possible topics include option pricing and diffusions, GARCH and stochastic volatility models, and high-frequency financial models. I will present statistical analysis for both low and high frequency financial data.


Some computations in the credit derivatives pricing
Yongjin Wang, Nankai University, China


We suggest some typical stochastic processes for modelling the so-called "regulated" asset price dynamics in financial markets. Under this modelling we discuss default issues, and advanced probabilistic analysis allows us to obtain explicit solutions for the pricing of credit derivatives.


Algorithmic trading - mathematics, technology, finance and regulation
Sam Wong, CASH Dynamic Opportunities Investment Limited, Hong Kong


For many years, algorithms have been used in financial trading. Various algorithms derived from technical indicators and/or academic investment models have been developed by many software providers since the 1980s. As financial exchanges have greatly enhanced their electronic infrastructure in the last few years, some algorithmic traders have followed the trend and become the focus of discussion in the media. In this talk, a brief background of algorithmic trading will be reviewed. In particular, we will cover how high frequency trading, a particular form of algorithmic trading, is made feasible under the current market structure. Some commonly used mathematical and technological concepts in algorithmic trading and the corresponding risk management will be introduced. An analysis of the relationship between algorithmic traders, market makers, financial exchanges and the general public will be presented. We will also summarize different proposals from international institutions on how to update the practice of financial regulation in order to maintain a level playing field under this new paradigm of financial exchanges.
