Summary: We propose a near-explosive random coefficient autoregressive model (NERC) to obtain predictive probabilities of the emergence and collapse of bubbles. The distribution of the autoregressive coefficient of this model is allowed to be centred at an O(T^{-α}) distance from unity, with α ∈ (0, 1). When the expectation of the autoregressive coefficient lies on the explosive side of unity, the NERC helps to model the temporary explosiveness of time series and to obtain the related predictive probabilities. We study the asymptotic properties of the NERC and provide a procedure for inference on its parameters. In empirical illustrations, we estimate predictive probabilities of bubbles or flash crashes in financial asset prices.
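As a concrete illustration of the model class described in this abstract, a random-coefficient AR(1) with mean coefficient at an O(T^{-α}) distance from unity can be simulated as follows. The Gaussian coefficient noise, its scale `sigma_rho`, and the localising constant `c` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def simulate_nerc(T, c, alpha, sigma_rho=0.05, seed=0):
    """Simulate a near-explosive random coefficient AR(1),
        y_t = rho_t * y_{t-1} + eps_t,
    where rho_t is random with mean 1 + c / T**alpha.
    (Illustrative parametrisation only.)"""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / T**alpha + sigma_rho * rng.standard_normal(T)
    eps = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho[t] * y[t - 1] + eps[t]
    return y

# c > 0 puts the mean coefficient on the explosive side of unity,
# producing a bubble-like episode; c < 0 gives a near-stationary path.
bubble = simulate_nerc(T=500, c=2.0, alpha=0.7)
calm = simulate_nerc(T=500, c=-2.0, alpha=0.7)
```

With c > 0 the simulated path exhibits the temporary explosiveness the model is designed to capture, while c < 0 yields an ordinary near-stationary series.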
Summary: This paper provides an orthogonal extension of the semiparametric difference-in-differences estimator proposed in earlier literature. The proposed estimator enjoys the so-called Neyman orthogonality (Chernozhukov et al., 2018) and thus allows researchers to flexibly use a rich set of machine learning methods in the first-step estimation. It is particularly useful when researchers confront a high-dimensional data set in which the number of potential control variables is larger than the sample size, so that conventional nonparametric estimation methods, such as kernel and sieve estimators, do not apply. I apply this orthogonal difference-in-differences estimator to evaluate the effect of tariff reduction on corruption. The empirical results show that tariff reduction substantially decreases corruption.
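The orthogonal (doubly robust) moment underlying this kind of estimator can be sketched as follows. The least-squares first steps stand in for the machine learning learners the paper allows, cross-fitting is omitted for brevity, and the simulated design is purely illustrative.

```python
import numpy as np

def orthogonal_did_att(dY, D, X):
    """Neyman-orthogonal (doubly robust) DiD score for the ATT -- a sketch.
    dY is the change in outcomes Y_post - Y_pre, D the treatment indicator,
    X the controls.  First steps use least squares as stand-ins for ML."""
    Xc = np.column_stack([np.ones(len(D)), X])
    # propensity score via a linear probability model (stand-in for any
    # ML classifier), trimmed away from 0 and 1
    ps = Xc @ np.linalg.lstsq(Xc, D, rcond=None)[0]
    ps = np.clip(ps, 0.05, 0.95)
    # E[dY | D = 0, X] fitted on the control group only
    beta = np.linalg.lstsq(Xc[D == 0], dY[D == 0], rcond=None)[0]
    m0 = Xc @ beta
    # orthogonal moment: reweighted comparison of dY net of m0
    w1 = D / D.mean()
    w0 = ps * (1 - D) / ((1 - ps) * D.mean())
    return np.mean((w1 - w0) * (dY - m0))

# simulated check with a true ATT of 1
rng = np.random.default_rng(1)
n = 5000
X = rng.standard_normal((n, 3))
D = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)
dY = D + X @ np.array([0.5, -0.3, 0.2]) + rng.standard_normal(n)
att = orthogonal_did_att(dY, D, X)
```

Because the score combines the outcome regression and the propensity weighting, small first-step errors enter only at second order, which is what makes plugging in regularised machine learning fits feasible.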
Summary: This manuscript proposes a new approach for unveiling existing linkages within the international oil market across multiple driving factors beyond production. A multilayer, multicountry network is extracted through a novel Bayesian graphical vector autoregressive model, which allows for a more comprehensive, dynamic representation of the network linkages than do traditional or static pairwise Granger-causal inference approaches. Building on previous work, the layers of the network include country- and region-specific oil production levels and rigs, through both simultaneous and lagged temporal dependences among key factors, while controlling for oil prices and a world economic activity index. The proposed approach extracts relationships across all variables through a dynamic, cross-regional network. This approach is highly scalable and adjusts for time-evolving linkages. The model outcome is a set of time-varying graphical networks that unveil both static representations of world oil linkages and variations in microeconomic relationships both within and between oil producers. An example is provided, illustrating the evolution of intra- and inter-regional relationships for two major interconnected oil producers: the United States, with a regional decomposition of its production and rig deployment, and the Arabian Peninsula and key Middle Eastern producers, with a country-based decomposition of production and rig deployment, while controlling for oil prices and global economic indices. Production is less affected than rigs by concurrent changes in oil prices and the overall economy. However, production, rather than rigs, is a lagged driver of prices, which indicates that the linkage between rigs and production may not be fully accounted for in the markets.
Wild bootstrap for fuzzy regression discontinuity designs: obtaining robust bias-corrected confidence intervals
Summary: This paper develops a novel wild bootstrap procedure to construct valid, robust bias-corrected confidence intervals for fuzzy regression discontinuity designs, providing an intuitive complement to existing robust bias-corrected methods. The confidence intervals generated by this procedure are valid under conditions similar to those of the procedures proposed by Calonico et al. (2014) and related literature. Simulations provide evidence that this new method is at least as accurate as the plug-in analytical corrections when applied to a variety of data-generating processes featuring endogeneity and clustering. Finally, we demonstrate its empirical relevance by revisiting Angrist and Lavy's (1999) analysis of the effect of class size on student outcomes.
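A minimal version of the resampling idea can be sketched with Rademacher wild-bootstrap weights applied to local linear residuals under a uniform kernel. The bias correction, clustering, and fuzzy first stage of the actual procedure are omitted, so this is only an illustrative skeleton of a sharp design.

```python
import numpy as np

def local_linear_jump(x, y, c=0.0, h=0.5):
    """Local linear estimate of the outcome jump at cutoff c with a
    uniform kernel of bandwidth h (a fuzzy design would repeat this
    for treatment take-up to form a ratio)."""
    above = (x >= c) & (x < c + h)
    below = (x < c) & (x >= c - h)
    def fit(mask):
        Xd = np.column_stack([np.ones(mask.sum()), x[mask] - c])
        b, *_ = np.linalg.lstsq(Xd, y[mask], rcond=None)
        return b[0], Xd @ b
    (a1, f1), (a0, f0) = fit(above), fit(below)
    return a1 - a0, above, below, f1, f0

def wild_bootstrap_ci(x, y, B=500, alpha=0.05, seed=0):
    """Percentile confidence interval for the jump from Rademacher
    wild-bootstrap residuals; a simplified stand-in for the paper's
    bias-corrected procedure."""
    rng = np.random.default_rng(seed)
    tau, above, below, f1, f0 = local_linear_jump(x, y)
    resid = np.zeros_like(y)
    resid[above] = y[above] - f1
    resid[below] = y[below] - f0
    draws = np.empty(B)
    for i in range(B):
        signs = rng.choice([-1.0, 1.0], size=len(y))  # Rademacher weights
        ystar = y.copy()
        ystar[above] = f1 + signs[above] * resid[above]
        ystar[below] = f0 + signs[below] * resid[below]
        draws[i] = local_linear_jump(x, ystar)[0]
    lo, hi = np.quantile(draws, [alpha / 2, 1 - alpha / 2])
    return tau, lo, hi

# simulated check: sharp design with a jump of 2 at the cutoff
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 2000)
y = 2.0 * (x >= 0) + x + 0.3 * rng.standard_normal(2000)
tau, lo, hi = wild_bootstrap_ci(x, y)
```

Multiplying residuals by random signs preserves their conditional heteroskedasticity, which is why the wild bootstrap is a natural fit for these designs.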
Summary: This paper investigates undesirable limitations of widely used count data instrumental variable models. To overcome the limitations, I propose a partially identifying single-equation model that requires neither strong separability of unobserved heterogeneity nor a triangular system. Sharp bounds (identified sets) of structural features are characterised by conditional moment inequalities. Numerical examples show that the size of an identified set can be very small when the support of an outcome is rich or instruments are strong. An algorithm for estimation and inference is presented. I illustrate the usefulness of the proposed model in an empirical application to the effects of supplemental insurance on healthcare utilisation.
Summary: We study accelerated failure time models in which the survivor function of the additive error term is log-concave. The log-concavity assumption covers large families of commonly used distributions and also represents the aging or wear-out phenomenon of the baseline duration. For right-censored failure time data, we construct semiparametric maximum likelihood estimates of the finite-dimensional parameter and establish their large-sample properties. The shape restriction is incorporated via a nonparametric maximum likelihood estimator of the hazard function. Our approach guarantees the uniqueness of a global solution for the estimating equations and delivers semiparametrically efficient estimates. Simulation studies and empirical applications demonstrate the usefulness of our method.
Summary: The classical problem of the monopolist faced with an unknown demand curve is considered in a simple stochastic setting. Sequential pricing strategies designed to maximize discounted profits are shown to converge sufficiently rapidly that they leave the monopolist ignorant about all but the most local features of demand. The failure of the monopolist to 'learn' his demand curve would seem to call into question some standard assumptions about agents' grasp of their economic environment.
Summary: Modern empirical work in regression discontinuity (RD) designs often employs local polynomial estimation and inference with a mean square error (MSE) optimal bandwidth choice. This bandwidth yields an MSE-optimal RD treatment effect estimator, but is by construction invalid for inference. Robust bias-corrected (RBC) inference methods are valid when using the MSE-optimal bandwidth, but we show that they yield suboptimal confidence intervals in terms of coverage error. We establish valid coverage error expansions for RBC confidence interval estimators and use these results to propose new inference-optimal bandwidth choices for forming these intervals. We find that the standard MSE-optimal bandwidth for the RD point estimator is too large when the goal is to construct RBC confidence intervals with the smallest coverage error rate. We further optimize the constant terms behind the coverage error to derive new optimal choices for the auxiliary bandwidth required for RBC inference. Our expansions also establish that RBC inference yields higher-order refinements (relative to traditional undersmoothing) in the context of RD designs. Our main results cover sharp and sharp kink RD designs under conditional heteroskedasticity, and we discuss extensions to fuzzy and other RD designs, clustered sampling, and pre-intervention covariate adjustments. The theoretical findings are illustrated with a Monte Carlo experiment and an empirical application, and the main methodological results are available in R and Stata packages.
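The bandwidth-bias tension behind these results can be seen in a small simulation: with curvature near the cutoff, a local linear fit at a large bandwidth is biased, while the higher-order polynomial fit that underlies RBC inference removes that bias. This sketch uses a uniform kernel and sets the auxiliary bandwidth equal to the main one for simplicity; the R and Stata packages referenced in the abstract implement the full procedure.

```python
import numpy as np

def rd_jump(x, y, deg, bw, c=0.0):
    """Polynomial fit of degree `deg` on each side of the cutoff within
    bandwidth `bw` (uniform kernel); returns the intercept difference,
    i.e. the estimated jump at the cutoff."""
    def fit(mask):
        Xd = np.vander(x[mask] - c, deg + 1, increasing=True)
        return np.linalg.lstsq(Xd, y[mask], rcond=None)[0][0]
    above = (x >= c) & (x < c + bw)
    below = (x < c) & (x >= c - bw)
    return fit(above) - fit(below)

# curvature on one side of the cutoff biases the local linear fit at a
# large bandwidth; the quadratic fit used for bias correction removes it
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 4000)
y = 1.0 * (x >= 0) + 3.0 * x**2 * (x >= 0) + 0.1 * rng.standard_normal(4000)
tau_ll = rd_jump(x, y, deg=1, bw=0.5)   # conventional local linear
tau_bc = rd_jump(x, y, deg=2, bw=0.5)   # higher-order fit behind RBC
```

With the true jump equal to 1, the linear estimate carries a smoothing bias of roughly -h²/2 under this design, while the quadratic fit is approximately unbiased; the RBC intervals account for the extra variance that the correction introduces.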