Extreme value estimators: Their long memory feature and forecasting performances in the U.S. stock indexes
This dissertation studies the long memory and forecasting performance of extreme value volatility estimators, which are constructed from the highest and lowest intraday prices. Several comparative studies (Rogers et al., 1994; Bali and Weinbaum, 2005; Shu and Zhang, 2006) show that extreme value estimators are more efficient than the traditional close-to-close estimator. Recognizing that this greater efficiency has the potential to enhance prediction accuracy, several researchers have examined the performance of forecasts based on these estimators. In analyzing the informational content of the estimators for future volatility, however, they overlook the long memory feature of the estimators. If this property is incorporated into prediction models, we have a better chance of producing volatility forecasts that excel the existing ones. Despite this possibility, the long memory property and the forecasting performance of the estimators have rarely been studied on the same ground. This dissertation puts these lines of research together for the first time.

First, I estimate long memory in the extreme value estimators. The degree of long memory is estimated with the ARFIMA (0,d,0) model, the semi-nonparametric test of Geweke and Porter-Hudak (1983), and Whittle's method. Applying the tests to the Parkinson and Garman-Klass estimators, I find ample evidence that the estimators possess long memory. To reflect recent studies (Granger and Hyung, 2004; Choi and Zivot, 2007) arguing that long memory in financial time series is overstated by multiple structural breaks, I also apply the long memory tests to break-eliminated series. I continue to find significant long memory in the new series, although the degrees of long memory become smaller. This evidence demonstrates that long memory processes are suitable for modeling the extreme value estimators.

Second, I examine whether the long memory feature of the extreme value estimators enhances prediction performance.
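The two range-based estimators named above have standard closed forms: Parkinson (1980) uses only the high-low range, while Garman-Klass (1980) adds open and close prices. The sketch below is my own minimal illustration of these textbook formulas, not code from the dissertation:

```python
import math

def parkinson_var(high, low):
    # Parkinson (1980): per-period variance from the high-low range,
    # sigma^2 = ln(H/L)^2 / (4 ln 2)
    return math.log(high / low) ** 2 / (4.0 * math.log(2.0))

def garman_klass_var(open_, high, low, close):
    # Garman-Klass (1980): adds open-to-close information,
    # sigma^2 = 0.5 ln(H/L)^2 - (2 ln 2 - 1) ln(C/O)^2
    hl = math.log(high / low)
    co = math.log(close / open_)
    return 0.5 * hl ** 2 - (2.0 * math.log(2.0) - 1.0) * co ** 2
```

Both use more of the intraday price path than a close-to-close squared return, which is the source of the efficiency gains the comparative studies report.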
From forecast comparisons, I find that long memory ARFIMA forecasts underperform short memory ARMA forecasts. This result contradicts Bollerslev (2001) and Choi et al. (2006), who conjecture that long memory models can produce more accurate forecasts than any alternative model. Examining the performances of the long and short memory forecasts further, I find that the poor performance of the ARFIMA models is related to the presence of structural breaks in the forecast evaluation periods. The ARFIMA models perform poorly because (1) they are slow to react to structural breaks in the series, as they attribute weight to distant lags when forming forecasts (Gabriel and Martins, 2004), and (2) the efficiency of the extreme value estimators deteriorates in high volatility regimes (Brandt and Kinlay, n.d.). My result suggests some caution toward Bollerslev (2001), who recommends long memory models for prediction, and supports Gabriel and Martins (2004), who argue that ARFIMA models yield poor forecasts when regime shifts occur in the series.

Third, I test whether the extreme value estimator based forecasts are competitive with forecasts that incorporate information on structural breaks in volatility. Motivated by Choi et al. (2006), I construct break-adjusted forecasts that contain information on multiple breaks in volatility and compare them with the extreme value estimator based forecasts. I find that the ARFIMA forecasts underperform the break-adjusted forecasts, while the ARMA forecasts perform as well as, and often better than, the break-adjusted forecasts. This is not surprising, because the ARMA model copes with structural changes in a time series better than the ARFIMA model does. This finding is consistent with Clements and Hendry (1998), who claim that simple linear time series models remain useful tools for prediction.
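The slow reaction of ARFIMA forecasts to breaks, noted above, comes from fractional differencing: the weights that (1 - L)^d places on past observations decay hyperbolically rather than exponentially, so distant lags keep influencing the forecast long after a break. A minimal illustration of this standard expansion (my own sketch, not the dissertation's code):

```python
def frac_diff_weights(d, n_lags):
    # Coefficients of (1 - L)^d via the recursion
    # w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
    # For 0 < d < 1 the magnitudes decay roughly like k ** (-d - 1),
    # far slower than the exponential decay of an ARMA model's weights.
    w = [1.0]
    for k in range(1, n_lags + 1):
        w.append(w[-1] * (k - 1 - d) / k)
    return w
```

With d = 0.4, the weight at lag 50 is still on the order of 10^-3, whereas an AR(1) coefficient of 0.5 raised to the 50th power is effectively zero; this is why an ARFIMA forecast keeps averaging over pre-break history.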
However, it contradicts Diebold and Inoue (2001), who conjecture that long memory models can be useful for prediction even if the data generating process (DGP) is affected by structural changes.

Fourth, the extreme value estimator based forecasts are compared with several conventional forecasts. From the pairwise comparisons, I find that the forecasts outperform the RiskMetrics™ and GARCH (1,1) forecasts, although their superiority is not always statistically significant. However, the extreme value estimator based forecasts are mostly as good as or worse than the asymmetric GARCH forecasts. Examining the performances of the two sets of forecasts across volatility regimes, I find that the asymmetric GARCH forecasts are close or superior to the extreme value estimator based forecasts when the markets are highly volatile. This is because volatility asymmetry is more helpful for prediction in high volatility regimes, a finding in accordance with Jones (2003), who reports that the leverage effect becomes stronger as the level of volatility increases. Finally, the extreme value estimator based forecasts underperform forecasts based on the realized volatility proposed by Andersen et al. (2003). Considering that the realized volatility estimator contains richer intraday information about stock markets than the extreme value estimators, this result is not surprising.

Fifth, I apply forecast combination techniques, which have been popular in macroeconomic forecasting and decision science, to volatility forecasting. The performances of the extreme value estimator based forecasts were somewhat disappointing, since the forecasts did not perform significantly better than some of the benchmark forecasts. Combining techniques may enable us to exploit the information in those forecasts selectively to predict future volatility.
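For reference, the two weakest benchmarks above are simple one-step variance recursions. The sketch below shows their textbook forms with parameters taken as given (the function names and the illustrative parameter values are mine, not the dissertation's; it omits the asymmetric GARCH variants the dissertation also uses):

```python
def riskmetrics_forecast(returns, lam=0.94):
    # RiskMetrics EWMA: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2,
    # with lam = 0.94 the value RiskMetrics uses for daily data.
    sigma2 = returns[0] ** 2  # initialize with the first squared return
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1.0 - lam) * r * r
    return sigma2

def garch11_forecast(returns, omega, alpha, beta):
    # GARCH(1,1): sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    # started at the unconditional variance omega / (1 - alpha - beta).
    sigma2 = omega / (1.0 - alpha - beta)
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2
```

The EWMA is the special case omega = 0, alpha = 1 - lam, beta = lam; neither recursion distinguishes positive from negative returns, which is exactly what the asymmetric GARCH models add.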
To examine this possibility, I combine the extreme value estimator based forecasts with a few GARCH type forecasts, employing several combining methods proposed in past studies (Bates and Granger, 1969; Newbold and Granger, 1974; Bunn, 1975; Makridakis and Winkler, 1983; Clemen and Winkler, 1986). To evaluate the combined forecasts, I compare them with (1) their own component forecasts and (2) the realized volatility based forecasts proposed by Andersen et al. (2003). Conducting the Diebold-Mariano equal accuracy test, I find that the combination techniques are generally effective at improving prediction accuracy as long as the performances of the two component forecasts are not far from each other.
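As a rough sketch of two of the tools above, a Bates-Granger style combination weights each component forecast by its inverse mean squared error, and the Diebold-Mariano test compares average losses of two forecasts. This simplified illustration (my own, with no HAC variance correction, unlike a full DM implementation) assumes two vectors of forecast errors:

```python
import math

def inverse_mse_weights(errors_a, errors_b):
    # Bates-Granger (1969) style weights: each component forecast is
    # weighted by the inverse of its historical mean squared error.
    mse_a = sum(e * e for e in errors_a) / len(errors_a)
    mse_b = sum(e * e for e in errors_b) / len(errors_b)
    w_a = (1.0 / mse_a) / (1.0 / mse_a + 1.0 / mse_b)
    return w_a, 1.0 - w_a

def diebold_mariano(errors_a, errors_b):
    # DM statistic on squared-error loss differentials d_t = e_a^2 - e_b^2;
    # negative values favor forecast a. (Simplified: no HAC correction,
    # so it is only illustrative for serially correlated losses.)
    d = [a * a - b * b for a, b in zip(errors_a, errors_b)]
    n = len(d)
    d_bar = sum(d) / n
    s2 = sum((x - d_bar) ** 2 for x in d) / (n - 1)
    return d_bar / math.sqrt(s2 / n)
```

When the two components have similar MSEs the weights approach one half, which matches the finding that combination helps most when the component forecasts perform comparably.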