Discussion about this post

Craig Masters, Ph.D.:

Though I come from an academic background, I can imagine a team facing a strict deadline that over-cleans its data just to ensure the model passes backtesting. One problem is that this introduces a tradeoff between passing the backtest and taking on structural risk. Over-cleaning can produce a sharp likelihood surface that isn't robust when the data are shuffled by resampling; I've seen this with pre-processing for GJR-GARCH. Ironically, even if the backtest is passed quickly, the model's fragility could slow or stop progress during a regime shift down the road.
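The fragility the commenter describes can be sketched with a toy resampling check. This is a hypothetical illustration, not the commenter's actual pipeline: "over-cleaning" is modeled as aggressively clipping the tails of a heavy-tailed return series, and the "parameter" being bootstrapped is just the sample volatility rather than a full GJR-GARCH fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heavy-tailed "returns": mostly calm, occasional shocks.
returns = rng.standard_t(df=3, size=2000) * 0.01

# "Over-cleaning": aggressively clipping the tails before fitting.
cleaned = np.clip(returns, *np.quantile(returns, [0.05, 0.95]))

def bootstrap_sigma(x, n_boot=500):
    """Bootstrap the volatility estimate (sample std as a stand-in MLE)."""
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    return x[idx].std(axis=1)

raw_sigmas = bootstrap_sigma(returns)
clean_sigmas = bootstrap_sigma(cleaned)

# The cleaned series looks far more "stable" under resampling...
print(f"raw   bootstrap spread of sigma: {raw_sigmas.std():.5f}")
print(f"clean bootstrap spread of sigma: {clean_sigmas.std():.5f}")

# ...but it systematically understates tail risk, which is exactly the
# structural fragility that surfaces in a regime shift.
print(f"raw sigma: {returns.std():.4f}  clean sigma: {cleaned.std():.4f}")
```

The cleaned series backtests as more stable precisely because the information that would have stressed the model was removed, which is the pass/structural-risk tradeoff in miniature.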
