Sunday, October 22, 2017

Pockets of Predictability

The possibility of localized "pockets of predictability", particularly in financial markets, is obviously intriguing.  Recently I've been noticing a similarly intriguing pocket of research on pockets of predictability.

The following paper, for example, was presented at the 2017 NBER-NSF Time Series Conference at Northwestern University, although it does not yet appear to be circulating:
"Pockets of Predictability", by Leland Farmer (UCSD), Lawrence Schmidt (Chicago), and Allan Timmermann (UCSD).  Abstract:  We show that return predictability in the U.S. stock market is a localized phenomenon, in which short periods, “pockets,” with significant predictability are interspersed with long periods with little or no evidence of return predictability. We explore possible explanations of this finding, including time-varying risk premia, and find that they are inconsistent with a general class of affine asset pricing models which allow for stochastic volatility and compound Poisson jumps. We find that pockets of return predictability can, however, be explained by a model of incomplete learning in which the underlying cash flow process is subject to change and investors update their priors about the current state. Simulations from the model demonstrate that investors’ learning about the underlying cash flow process can induce patterns that look, ex-post, like local return predictability, even in a model in which ex-ante expected returns are constant.

And this one just appeared as an NBER w.p.: "Sparse Signals in the Cross-Section of Returns", by Alexander M. Chinco, Adam D. Clark-Joseph, Mao Ye, NBER w.p. 23933, October 2017.
http://papers.nber.org/papers/w23933?utm_campaign=ntw&utm_medium=email&utm_source=ntw
Abstract: This paper applies the Least Absolute Shrinkage and Selection Operator (LASSO) to make rolling 1-minute-ahead return forecasts using the entire cross section of lagged returns as candidate predictors. The LASSO increases both out-of-sample fit and forecast-implied Sharpe ratios. And, this out-of-sample success comes from identifying predictors that are  unexpected, short-lived, and sparse. Although the LASSO uses a statistical rule rather than economic intuition to identify predictors, the predictors it identifies are nevertheless associated with economically meaningful events: the LASSO tends to identify as predictors stocks with news about fundamentals.
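For a rough sense of the mechanics, the sketch below runs a rolling LASSO of each one-minute-ahead return on the full lagged cross section, in the spirit of the paper.  The 30-minute estimation window, the fixed sklearn penalty, and the shape of the returns matrix R are my illustrative assumptions, not the authors' choices.

```python
# Rough sketch of the rolling-LASSO idea (not the paper's exact implementation): forecast
# stock j's next one-minute return from the lagged returns of the entire cross section.
# Assumptions: R is a (T, N) matrix of one-minute returns; window and alpha are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

def rolling_lasso_forecasts(R, j, window=30, alpha=1e-4):
    """One-minute-ahead forecasts of R[:, j] using the lagged cross section as predictors."""
    T, N = R.shape
    forecasts = np.full(T, np.nan)
    for t in range(window, T - 1):
        X = R[t - window:t, :]                     # lagged cross-sectional returns
        y = R[t - window + 1:t + 1, j]             # one-step-ahead target returns
        fit = Lasso(alpha=alpha, fit_intercept=True).fit(X, y)
        forecasts[t + 1] = fit.predict(R[t:t + 1, :])[0]
    return forecasts
```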

Here's some associated work in dynamical systems theory:  "A Mechanism for Pockets of Predictability in Complex Adaptive Systems", by Jorgen Vitting Andersen, Didier Sornette, Europhysics Letters, 2005.  https://arxiv.org/abs/cond-mat/0410762
 Abstract:  We document a mechanism operating in complex adaptive systems leading to dynamical pockets of predictability ("prediction days''), in which agents collectively take predetermined courses of action, transiently decoupled from past history. We demonstrate and test it out-of-sample on synthetic minority and majority games as well as on real financial time series. The surprising large frequency of these prediction days implies a collective organization of agents and of their strategies which condense into transitional herding regimes.

There's even an ETH Zürich master's thesis:  "In Search Of Pockets Of Predictability", by A.T. Morera, 2008
https://www.ethz.ch/content/dam/ethz/special-interest/mtec/chair-of-entrepreneurial-risks-dam/documents/dissertation/master%20thesis/Master_Thesis_Alan_Taxonera_Sept08.pdf

Finally, related ideas have appeared recently in the forecast evaluation literature, such as this paper and many of the references therein:  "Testing for State-Dependent Predictive Ability", by Sebastian Fossati, University of Alberta, September 2017.
 https://sites.ualberta.ca/~econwps/2017/wp2017-09.pdf
Abstract: This paper proposes a new test for comparing the out-of-sample forecasting performance of two competing models for situations in which the predictive content may be state-dependent (for example, expansion and recession states or low and high volatility states). To apply this test the econometrician is not required to observe when the underlying states shift. The test is simple to implement and accommodates several different cases of interest. An out-of-sample forecasting exercise for US output growth using real-time data illustrates the improvement of this test over previous approaches to perform forecast comparison.
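As a point of reference only: the familiar unconditional comparison that Fossati's test generalizes is a Diebold-Mariano-type test on the loss differential.  The sketch below is that plain benchmark under squared-error loss for one-step forecasts, not the state-dependent test itself.

```python
# Plain Diebold-Mariano-style benchmark (NOT Fossati's state-dependent test): compare two
# forecast series via the mean loss differential under squared-error loss. Assumes one-step
# forecasts so that no HAC correction is applied; this is an illustrative simplification.
import numpy as np

def dm_statistic(y, f1, f2):
    """t-type statistic for H0: equal predictive accuracy of forecasts f1 and f2."""
    y, f1, f2 = map(np.asarray, (y, f1, f2))
    d = (y - f1) ** 2 - (y - f2) ** 2        # loss differential series
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```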

Saturday, October 14, 2017

Machine Learning and Macro

Earlier I posted here on machine learning and central banking.  Here's something related.  

Last week Penn's Warren Center hosted a timely and stimulating conference, "Machine Learning for Macroeconomic Prediction and Policy".  The program appears below.  Papers were not posted, but with a little Googling you should be able to obtain those that are available.

Conference on Machine Learning for Macroeconomic Prediction and Policy

October 12 and 13, 2017

Glandt Forum, Singh Center for Nanotechnology


Co-Sponsored by Penn’s Warren Center for Network and Data Sciences
        and the Federal Reserve Bank of Philadelphia

Organizers: Michael Dotsey (FRBP), Jesus Fernandez-Villaverde (Penn), Michael Kearns (Penn)

SCHEDULE:

Thursday October 12:

8:00 Breakfast

8:45 Welcome

9:00 Stephen Hansen (University of Oxford): The Long-Run Information Effect of Central Bank Text

9:45 Stephen Ryan (Washington University): Classification Trees for Heterogeneous Moment-Based Models

10:30 Break

11:00 James Cowie (DeepMacro): DeepMacro Data Challenges

11:45 Galo Nuno (Banco de España): Machine Learning and Heterogeneous Agent Models

12:30 Lunch

1:30 Francis X. Diebold (Penn): Egalitarian LASSO for Combining Central Bank Survey Forecasts

2:15 Lyle Ungar (Penn): How to Make Better Forecasts

3:00 Vegard Larsen (Norges Bank): Components of Uncertainty

3:45 Break

4:15 Panel: ML and Econometrics: Similarities and Differences (Michael Kearns, Vegard Larsen, Stephen Hansen, Rakesh Vohra (Penn))

Friday October 13:

9:00 Aaron Smalter Hall (Federal Reserve Bank of Kansas City): Recession Forecasting with Bayesian Classification

9:45 Susan Athey (Stanford GSB): Estimating Heterogeneity in Structural Parameters Using Generalized Random Forests

10:30 Break

11:00 Panel: ML Challenges at the Fed (Jose Canals-Cerda (Philadelphia Fed), Galo Nuno, Jesus Fernandez-Villaverde, Aaron Smalter Hall)

12:30 Lunch

Departures

Saturday, October 7, 2017

Long Memory in Realized Volatility

A noteworthy aspect of long memory in realized asset return volatility is that in many leading cases it's basically undeniable on the basis of a variety of evidence -- the question isn't existence but rather strength.  Hence it's useful to have a broad and comparable set of state-of-the-art (local Whittle) estimates together in one place, as in the interesting paper below.  For the most part it gets d in [.4, .6], consistent with my personal experience of d usually around .45, in the covariance stationary (finite variance) region d<.5, but close to the boundary.
http://d.repec.org/n?u=RePEc:han:dpaper:dp-601&r=ecm


Date: 2017-07
By: Kai Wenger, Christian Leschinski, and Philipp Sibbertsen
The focus of the volatility literature on forecasting and the predominance of the conceptually simpler HAR model over long memory stochastic volatility models has led to the fact that the actual degree of memory estimates has rarely been considered. Estimates in the literature range roughly between 0.4 and 0.6 - that is from the higher stationary to the lower non-stationary region. This difference, however, has important practical implications - such as the existence or non-existence of the fourth moment of the return distribution. Inference on the memory order is complicated by the presence of measurement error in realized volatility and the potential of spurious long memory. In this paper we provide a comprehensive analysis of the memory in variances of international stock indices and exchange rates. On the one hand, we find that the variance of exchange rates is subject to spurious long memory and the true memory parameter is in the higher stationary range. Stock index variances, on the other hand, are free of low frequency contaminations and the memory is in the lower non-stationary range. These results are obtained using state of the art local Whittle methods that allow consistent estimation in presence of perturbations or low frequency contaminations.
Keywords: Realized Volatility; Long Memory; Perturbation; Spurious Long Memory
JEL: C12 C22 C58 G15
URL:http://d.repec.org/n?u=RePEc:han:dpaper:dp-601&r=ecm
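For readers who want to see the basic machinery, here is a minimal sketch of the plain local Whittle estimator of d (in the spirit of Robinson's estimator), without the perturbation- and contamination-robust refinements the paper actually uses.  The bandwidth rule m ≈ T^0.65 and the grid search are my illustrative assumptions.

```python
# Minimal plain local Whittle sketch (not the paper's robustified estimator): estimate the
# memory parameter d of a series x by minimizing the concentrated Whittle objective over the
# first m Fourier frequencies. Bandwidth m ~ T^0.65 and the grid bounds are assumptions.
import numpy as np

def local_whittle_d(x, m=None):
    x = np.asarray(x, dtype=float)
    T = len(x)
    m = int(T ** 0.65) if m is None else m
    lam = 2 * np.pi * np.arange(1, m + 1) / T                              # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * T)   # periodogram

    def objective(d):
        g = np.mean(lam ** (2 * d) * I)
        return np.log(g) - 2 * d * np.mean(np.log(lam))

    grid = np.linspace(-0.49, 0.99, 300)
    return grid[np.argmin([objective(d) for d in grid])]
```

Applied to a realized-volatility series, estimates near .45 would land in the covariance stationary region, consistent with the discussion above.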


Sunday, October 1, 2017

Economics Working Papers now in arXiv

Economics working papers are now a part of arXiv.  This is great news, as arXiv is the premier working paper hosting platform in mathematics and the mathematical / statistical sciences.  The Economics arXiv will start with a single subject area of Econometrics (econ.EM).  More economics subject areas will be added (of course), and moreover, subject areas can and will be subdivided.  Hats off to the econ.EM team (Victor Chernozhukov, MIT; Iván Fernández-Val, Boston University; Marc Henry, Penn State; Francesca Molinari, Cornell; Jörg Stoye, Bonn & Cornell; Martin Weidner, University College London).  The full announcement is here.

Sunday, September 24, 2017

Egalitarian LASSO for Forecast Combination

Here's a new one.  It was something of a long and winding road.  We introduce simple "egalitarian LASSO" procedures that set some combining weights to zero and shrink those remaining toward equality.  The feasible versions don't work very well, due to difficulties associated with cross-validating tuning parameters in small samples, but the lessons learned in studying the infeasible version turn out to be very valuable -- indeed they directly motivate a new procedure, which we call "best <N-averaging", which solves the cross-validation problem and performs intriguingly well.

Diebold, F.X. and Shin, M. (2017), “Beating the Simple Average:  Egalitarian LASSO for Combining Economic Forecasts”, Penn Institute for Economic Research (PIER) Working Paper No. 17-017, available at SSRN: https://ssrn.com/abstract=3032492.
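A minimal sketch of the core egalitarian-LASSO idea (my stylized reading, not the paper's implementation): write each combining weight as 1/N plus a deviation and put an L1 penalty on the deviations, so weights are shrunk toward equality and some deviations are set exactly to zero.  The use of sklearn's Lasso and the penalty value are illustrative assumptions.

```python
# Stylized egalitarian-LASSO sketch (not the paper's implementation): with y = F w + e and
# w = (1/N)*1 + delta, regress y minus the equal-weight combination on F with an L1 penalty
# on delta. The sklearn Lasso call and alpha are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def egalitarian_lasso_weights(y, F, alpha=0.1):
    """y: (T,) realizations; F: (T, N) individual forecasts. Returns N combining weights."""
    T, N = F.shape
    y_tilde = np.asarray(y) - F.mean(axis=1)          # strip out the equal-weight combination
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(F, y_tilde)
    return 1.0 / N + fit.coef_                        # equal weights plus sparse deviations
```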

Friday, September 22, 2017

National Bank of Poland

It strikes me that I'm seeing progressively more research in dynamic predictive modeling from the National Bank of Poland.  A few recent examples appear below.  Related information is here.  Nice job.

-- Karol Szafranek, "Bagged artificial neural networks in forecasting inflation: An extensive comparison with current modelling frameworks", 2017, No. 262

-- Siem Jan Koopman, André Lucas, and Marcin Zamojski, "Dynamic term structure models with score-driven time-varying parameters: estimation and forecasting", 2017, No. 258

-- Piotr Bańbuła and Marcin Pietrzak, "Early warning models of banking crises applicable to non-crisis countries", 2017, No. 257

-- Alessia Paccagnini, "Forecasting with FAVAR: macroeconomic versus financial factors", 2017, No. 256

-- Paweł Pońsko and Bartosz Rybaczyk, "Fan chart – a tool for NBP’s monetary policy making", 2016, No. 241

-- Halina Kowalczyk and Ewa Stanisławska, "Are experts’ probabilistic forecasts similar to the NBP projections?", 2016, No. 238

Sunday, September 17, 2017

Machine Learning Meets Central Banking

Here's a nice new working paper from the Bank of England.  There's nothing new methodologically, but there are three fascinating and detailed applications / case studies (banking supervision under imperfect information, UK CPI inflation forecasting, unicorns in financial technology).  For your visual enjoyment I include their Figure 19 below.  (It's the network graph for global technology start-ups in 2014, not spin-art...)

Monday, September 11, 2017

2017 NBER-NSF Time Series Meeting

Just back from the 2017 NBER-NSF Time Series Conference at Northwestern.  Quite a feast -- my head is spinning.  Program dumped below; formatted version here.  Many thanks to the program committee for producing this event, and more generally for keeping the series going, year after year, stronger than ever.  (See here for some history and links to past locations, programs, etc.)

The papers were very strong.  Among those that I found particularly interesting are:

-- Moon.  Forecasting in short panels.  You'd think it would be impossible since you need the individual effects.  But it's not.

“Forecasting with Dynamic Panel Data Models”, Hyungsik Roger Moon (University of Southern California), Laura Liu, and Frank Schorfheide

-- Shephard.  Causal estimation meets time series.

“Time series experiments, causal estimands and exact p-values”, Neil Shephard (Harvard University) and Iavor Bojinov

-- The entire (and marvelously-coherent) "Lumsdaine Session" (Pruitt, Pelger, Giglio).  Real progress on econometric methods for identifying financial-market risk factors, with sharp empirical results.

“Instrumented Principal Component Analysis”, Seth Pruitt (Arizona State University), Bryan Kelly, and Yinan Su
“Estimating Latent Asset-Pricing Factors”, Markus Pelger (Stanford University) and Martin Lettau

“Inference on Risk Premia in the Presence of Omitted Factors”, Stefano Giglio (University of Chicago) and Dacheng Xiu



------------------

2017 NBER-NSF Time Series Conference
Friday, September 8 – Saturday, September 9
Kellogg School of Management
Kellogg Global Hub
2211 N Campus Drive; Evanston, IL 60208
Friday, September 8
Registration begins 10:20am (White Auditorium)
Welcome and opening remarks: 10:50am
Session 1: 11:00am – 12:30pm
Chair: Ruey S. Tsay (University of Chicago)
 “Egalitarian Lasso for Shrinkage and Selection in Forecast Combination” Francis X. Diebold (University of Pennsylvania) and Minchul Shin
 “Forecasting with Dynamic Panel Data Models” Hyungsik Roger Moon (University of Southern California), Laura Liu, and Frank Schorfheide
 “Large Vector Autoregressions with Stochastic Volatility and Flexible Priors” Andrea Carriero (Queen Mary University of London), Todd E. Clark, and Massimiliano Marcellino
12:30pm - 2:00pm: Lunch and Poster Session 1 (Faculty Summit, 4th Floor)
 “The Dynamics of Expected Returns: Evidence from Multi-Scale Time Series Modeling“ Daniele Bianchi (University of Warwick)
 “Testing for Unit-root Non-stationarity against Threshold Stationarity” Kung-Sik Chan (University of Iowa)
 “Group Orthogonal Greedy Algorithm for Change-point Estimation of Multivariate Time Series” Ngai Hang Chan (The Chinese University of Hong Kong)
 “The Impact of Waiting Times on Volatility Filtering and Dynamic Portfolio Allocation” Dobrislav Dobrev (Federal Reserve Board of Governors)
 “Testing for Mutually Exciting Jumps and Financial Flights in High Frequency Data” Mardi Dungey (University of Tasmania), Xiye Yang (Rutgers University) presenting
 “Pockets of Predictability” Leland E. Farmer (University of California, San Diego)
 “Factor Models of Arbitrary Strength” Simon Freyaldenhoven (Brown University)
 “Inference for VARs Identified with Sign Restrictions” Eleonora Granziera (Bank of Finland)
 “The Time-Varying Effects of Conventional and Unconventional Monetary Policy: Results from a New Identification Procedure” Atsushi Inoue (Vanderbilt University)
 “On spectral density estimation via nonlinear wavelet methods for non-Gaussian linear processes” Linyuan Li (University of New Hampshire)
 “Multivariate Bayesian Predictive Synthesis in Macroeconomic Forecasting” Kenichiro McAlinn (Duke University)
 “Periodic dynamic factor models: Estimation approaches and applications” Vladas Pipiras (University of North Carolina)
 “Canonical stochastic cycles and band-pass filters for multivariate time series” Thomas M. Trimbur (U. S. Census Bureau)
Session 2: 2:00pm - 3:30pm
Chair: Giorgio Primiceri (Northwestern University)
 “Understanding the Sources of Macroeconomic Uncertainty” Tatevik Sekhposyan (Texas A&M University), Barbara Rossi, and Matthieu Soupre
 “Safety, Liquidity, and the Natural Rate of Interest” Marco Del Negro (Federal Reserve Bank of New York), Domenico Giannone, Marc P. Giannoni, and Andrea Tambalotti
 “Structural Interpretation of Vector Autoregressions with Incomplete Identification: Revisiting the Role of Oil Supply and Demand Shocks” Christiane Baumeister (University of Notre Dame) and James D. Hamilton
Afternoon Break: 3:30pm-4:00pm
Session 3: 4:00pm – 5:30pm
Chair: Serena Ng (Columbia University)
 “Controlling the Size of Autocorrelation Robust Tests” Benedikt M. Pötscher (University of Vienna) and David Preinerstorfer
 “Heteroskedasticity Autocorrelation Robust Inference in Time Series Regressions with Missing Data” Timothy J. Vogelsang (Michigan State University) and Seung-Hwa Rho
 “Time series experiments, causal estimands and exact p-values” Neil Shephard (Harvard University) and Iavor Bojinov
5:30pm – 7pm: Cocktail Reception and Poster Session 2 (Faculty Summit, 4th Floor)
 “Macro Risks and the Term Structure of Interest Rates” Andrey Ermolov (Fordham University)
 “Holdings-based Fund Performance Measures: Estimation and Inference” Wayne E. Ferson (University of Southern California), Junbo L. Wang (Louisiana State University) presenting
 “Economic Predictions with Big Data: The Illusion of Sparsity” Domenico Giannone (Federal Reserve Bank of New York)
 “Estimation and Inference of Dynamic Structural Factor Models with Over-identifying Restrictions” Xu Han (City University of Hong Kong)
 “Bayesian Predictive Synthesis: Forecast Calibration and Combination” Matthew C. Johnson (Duke University)
 “Time Series Modeling on Dynamic Networks” Jonas Krampe (TU Braunschweig)
 “The Complexity of Bank Holding Companies: A Topological Approach” Robin L. Lumsdaine (American University)
 “Sieve Estimation of Option Implied State Price Density” Zhongjun Qu (Boston University) - Junwen Lu (Boston University) presenting
 “Linear Factor Models and the Estimation of Expected Returns” Cisil Sarisoy (Northwestern University)
 “Efficient Parameter Estimation for Multivariate Jump-Diffusions” Gustavo Schwenkler (Boston University)
 “News-Driven Uncertainty Fluctuations” Dongho Song (Boston College)
 “Contagion, Systemic Risk and Diagnostic Tests in Large Mixed Panels” Cindy S.H. Wang (National Tsing Hua University and CORE, University Catholique de Louvain)
7-10pm: Dinner (White Auditorium)
 Dinner speaker: Nobel Laureate Robert F. Engle
Saturday, September 9
Continental Breakfast: 8:00am – 8:30am
Registration begins 8:30am (White Auditorium)
Session 4: 9:00am – 10:30am
Chair: Thomas Severini (Northwestern University)
 “Estimation of time varying covariance matrices for large datasets” Liudas Giraitis (Queen Mary University of London), Y. Dendramis, and G. Kapetanios
 “Indirect Inference With(Out) Constraints” Eric Renault (Brown University) and David T. Frazier
 “Edgeworth expansions for a class of spectral density estimators and their applications to interval estimation” S.N. Lahiri (North Carolina State University) and A. Chatterjee
Morning Break: 10:30am-11:00am
Session 5: 11:00am-12:30pm
Chair: Robin L. Lumsdaine (American University)
 “Instrumented Principal Component Analysis” Seth Pruitt (Arizona State University), Bryan Kelly, and Yinan Su
 “Estimating Latent Asset-Pricing Factors” Markus Pelger (Stanford University) and Martin Lettau
 “Inference on Risk Premia in the Presence of Omitted Factors” Stefano Giglio (University of Chicago) and Dacheng Xiu
12:30pm-2pm: Lunch and Poster Session 3 (Faculty Summit, 4th Floor)
 “Regularizing Bayesian Predictive Regressions” Guanhao Feng (City University of Hong Kong)
 “Good Jumps, Bad Jumps, and Conditional Equity Premium” Hui Guo (University of Cincinnati)
 “High-dimensional Linear Regression for Dependent Observations with Application to Nowcasting” Yuefeng Han (The University of Chicago)
 “Maximum Likelihood Estimation for Integer-valued Asymmetric GARCH (INAGARCH) Models” Xiaofei Hu (BMO Harris Bank, N.A.)
 “Tail Risk in Momentum Strategy Returns” Soohun Kim (Georgia Institute of Technology)
 “The Perils of Counterfactual Analysis with Integrated Processes” Marcelo C. Medeiros (Pontifical Catholic University of Rio de Janeiro) and Ricardo Masini (Pontifical Catholic University of Rio de Janeiro)
 “Anxious unit root processes” Jon Michel (The Ohio State University)
 “Limiting Local Powers and Power Envelopes of Panel AR and MA Unit Root Tests” Katsuto Tanaka (Gakushuin University)
 “High-Frequency Cross-Market Trading: Model Free Measurement and Applications”
Ernst Schaumburg (AQR Capital Management, LLC) – Dobrislav Dobrev (Federal Reserve Board of Governors) presenting
 “A persistence-based Wold-type decomposition for stationary time series” Claudio Tebaldi (Bocconi University)
 “Necessary and Sufficient Conditions for Solving Multivariate Linear Rational Expectations Models and Factoring Matrix Polynomials” Peter A. Zadrozny (Bureau of Labor Statistics)
Session 6: 2:00pm – 3:30pm
Chair: Beth Andrews (Northwestern University)
 “Models for Time Series of Counts with Shape Constraints” Richard A. Davis (Columbia University) and Jing Zhang
 “Computationally Efficient Distribution Theory for Bayesian Inference of High-Dimensional Dependent Count-Valued Data” Scott H. Holan (University of Missouri, U.S. Census Bureau), Jonathan R. Bradley, and Christopher K. Wikle
 “Functional Autoregression for Sparsely Sampled Data”
Daniel R. Kowal (Cornell University, Rice University)

Monday, September 4, 2017

More on New p-Value Thresholds

I recently blogged on a new proposal heavily backed by elite statisticians to "redefine statistical significance", forthcoming in the elite journal Nature Human Behaviour. (A link to the proposal appears at the end of this post.)

I have a bit more to say. It's not just that I find the proposal counterproductive; I have to admit that I also find it annoying, bordering on offensive.

I find it inconceivable that the authors' p<.005 recommendation will affect their own behavior, or that of others like them. They're all skilled statisticians, hardly so naive as to declare a "discovery" simply because a p-value does or doesn't cross a magic threshold, whether .05 or .005. Serious evaluations and interpretations of statistical analyses by serious statisticians are much more nuanced and rich -- witness the extended and often-heated discussion in any good applied statistics seminar.

If the p<.005 threshold won't change the behavior of skilled statisticians, then whose behavior MIGHT it change? That is, reading between the lines, to whom is the proposal REALLY addressed?  Evidently to those much less skilled, the proverbial "practitioners", whom the authors presumably hope to keep out of trouble by providing a rule of thumb that can at least be followed mechanically.

How patronizing.


------


Redefine Statistical Significance

Date: 2017
By:
Daniel Benjamin; James Berger; Magnus Johannesson; Brian Nosek; E. Wagenmakers; Richard Berk; Kenneth Bollen; Bjorn Brembs; Lawrence Brown; Colin Camerer; David Cesarini; Christopher Chambers; Merlise Clyde; Thomas Cook; Paul De Boeck; Zoltan Dienes; Anna Dreber; Kenny Easwaran; Charles Efferson; Ernst Fehr; Fiona Fidler; Andy Field; Malcom Forster; Edward George; Tarun Ramadorai; Richard Gonzalez; Steven Goodman; Edwin Green; Donald Green; Anthony Greenwald; Jarrod Hadfield; Larry Hedges; Leonhard Held; Teck Hau Ho; Herbert Hoijtink; James Jones; Daniel Hruschka; Kosuke Imai; Guido Imbens; John Ioannidis; Minjeong Jeon; Michael Kirchler; David Laibson; John List; Roderick Little; Arthur Lupia; Edouard Machery; Scott Maxwell; Michael McCarthy; Don Moore; Stephen Morgan; Marcus Munafo; Shinichi Nakagawa; Brendan Nyhan; Timothy Parker; Luis Pericchi; Marco Perugini; Jeff Rouder; Judith Rousseau; Victoria Savalei; Felix Schonbrodt; Thomas Sellke; Betsy Sinclair; Dustin Tingley; Trisha Zandt; Simine Vazire; Duncan Watts; Christopher Winship; Robert Wolpert; Yu Xie; Cristobal Young; Jonathan Zinman; Valen Johnson

Abstract: We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.
http://d.repec.org/n?u=RePEc:feb:artefa:00612&r=ecm 

Sunday, August 27, 2017

New p-Value Thresholds for Statistical Significance

This is presently among the hottest topics / discussions / developments in statistics.  Seriously.  Just look at the abstract and dozens of distinguished authors of the paper below, which is forthcoming in one of the world's leading science outlets, Nature Human Behaviour.

Of course data mining, or overfitting, or whatever you want to call it, has always been a problem, warranting strong and healthy skepticism regarding alleged "new discoveries".  But the whole point of examining p-values is to AVOID anchoring on arbitrary significance thresholds, whether the old magic .05 or the newly-proposed magic .005.  Just report the p-value, and let people decide for themselves how they feel.  Why obsess over asterisks, and whether/when to put them next to things?

Postscript:

Reading the paper, which I had not done before writing the paragraph above (there's largely no need, as the wonderfully concise abstract says it all), I see that it anticipates my objection at the end of a section entitled "potential objections":
Changing the significance threshold is a distraction from the real solution, which is to replace null hypothesis significance testing (and bright-line thresholds) with more focus on effect sizes and confidence intervals, treating the P-value as a continuous measure, and/or a Bayesian method.
Hear, hear! Marvelously well put.

The paper offers only a feeble refutation of that "potential" objection:
Many of us agree that there are better approaches to statistical analyses than null hypothesis significance testing, but as yet there is no consensus regarding the appropriate choice of replacement. ... Even after the significance threshold is changed, many of us will continue to advocate for alternatives to null hypothesis significance testing. 
I'm all for advocating alternatives to significance testing.  That's important and helpful.  As for continuing to promulgate significance testing with magic significance thresholds, whether .05 or .005, well, you can decide for yourself.

Redefine Statistical Significance
Date:2017
By: Daniel Benjamin; James Berger; Magnus Johannesson; Brian Nosek; E. Wagenmakers; Richard Berk; Kenneth Bollen; Bjorn Brembs; Lawrence Brown; Colin Camerer; David Cesarini; Christopher Chambers; Merlise Clyde; Thomas Cook; Paul De Boeck; Zoltan Dienes; Anna Dreber; Kenny Easwaran; Charles Efferson; Ernst Fehr; Fiona Fidler; Andy Field; Malcom Forster; Edward George; Tarun Ramadorai; Richard Gonzalez; Steven Goodman; Edwin Green; Donald Green; Anthony Greenwald; Jarrod Hadfield; Larry Hedges; Leonhard Held; Teck Hau Ho; Herbert Hoijtink; James Jones; Daniel Hruschka; Kosuke Imai; Guido Imbens; John Ioannidis; Minjeong Jeon; Michael Kirchler; David Laibson; John List; Roderick Little; Arthur Lupia; Edouard Machery; Scott Maxwell; Michael McCarthy; Don Moore; Stephen Morgan; Marcus Munafo; Shinichi Nakagawa; Brendan Nyhan; Timothy Parker; Luis Pericchi; Marco Perugini; Jeff Rouder; Judith Rousseau; Victoria Savalei; Felix Schonbrodt; Thomas Sellke; Betsy Sinclair; Dustin Tingley; Trisha Zandt; Simine Vazire; Duncan Watts; Christopher Winship; Robert Wolpert; Yu Xie; Cristobal Young; Jonathan Zinman; Valen Johnson

Abstract:  
We propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.



http://d.repec.org/n?u=RePEc:feb:artefa:00612&r=ecm