ARIMA modeling has felt like a bit of a nemesis since I was first exposed to it.
Figuring out what settings to use with it has been my main challenge. The ACF and PACF plots are supposed to help here, but reading them is a skill in itself.
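To make that reading heuristic concrete: for a pure AR(p) process the PACF should "cut off" after lag p while the ACF decays slowly, and for a pure MA(q) process it is the ACF that cuts off after lag q. Here is a minimal numpy sketch of both functions (my own toy version, not a library implementation; real packages compute confidence bands too) applied to a simulated AR(1) series:

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation function for lags 0..nlags."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom
                             for k in range(1, nlags + 1)])

def pacf(x, nlags):
    """Partial autocorrelation via the Yule-Walker equations:
    the PACF at lag k is the last AR coefficient of an AR(k) fit."""
    r = acf(x, nlags)
    out = [1.0]
    for k in range(1, nlags + 1):
        # k x k Toeplitz matrix of autocorrelations r0..r_{k-1}
        R = np.array([[r[abs(i - j)] for j in range(k)] for i in range(k)])
        phi = np.linalg.solve(R, r[1:k + 1])
        out.append(phi[-1])
    return np.array(out)

# Simulate an AR(1) process: x_t = 0.7 * x_{t-1} + noise.
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, len(x)):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()

# The ACF decays gradually (0.7, ~0.49, ~0.34, ...) while the PACF
# drops to roughly zero after lag 1 -- the AR(1) signature.
print(np.round(acf(x, 3), 2))
print(np.round(pacf(x, 3), 2))
```

Seeing the lag-1 PACF spike and nothing after it is exactly the pattern the textbooks describe for an AR(1).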
There is an auto.arima function, which tests many candidate models and is the quickest way to a good answer. But of course, applying an automatic model without understanding its logic doesn’t satisfy my drive to understand the how AND the why.
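The core idea behind auto.arima (R's forecast package searches over the (p, d, q) and seasonal orders and keeps the fit with the best information criterion) can be sketched in a few lines. This is only a toy version of that logic, restricted to pure AR(p) models fit by conditional least squares, not a reproduction of the real algorithm:

```python
import numpy as np

def ar_aic(x, p):
    """Fit AR(p) by conditional least squares and return its AIC."""
    y = x[p:]
    if p == 0:
        resid = y - y.mean()
    else:
        # Lag matrix: column j holds the series shifted back by j steps.
        X = np.column_stack([x[p - j:len(x) - j] for j in range(1, p + 1)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
    n = len(y)
    rss = float(resid @ resid)
    # Gaussian-likelihood AIC up to a constant: n*log(RSS/n) + 2*(params).
    return n * np.log(rss / n) + 2 * (p + 1)

# Simulate an AR(2) process and let the criterion choose the order.
rng = np.random.default_rng(1)
x = np.zeros(1500)
for t in range(2, len(x)):
    x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + rng.standard_normal()

best_p = min(range(5), key=lambda p: ar_aic(x, p))
print("order chosen by AIC:", best_p)
```

Fitting every candidate and keeping the lowest AIC is the "test many variations" step; the part auto.arima adds on top is a clever stepwise search so it doesn't have to try every combination.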
Today I was studying some text on ARIMA modeling when a realisation suddenly hit me: I’ve used my own version of a seasonal ARMA model before.
In my last job before this one, data that I needed each week for a report was often late. Our accepted solution had been to use a moving average from the previous 6 weeks as a predictor and adjust the data retroactively, as this prediction was rarely off. However, when it was off, it was off by a lot, and I felt there must be a better way.
Through experimentation I settled on using the moving average from the previous year, the moving average from the current year, and the latest figure as predictors.
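Reconstructing that predictor from memory, it might look something like the sketch below. The six-week windows, the 52-week season, and the equal weighting of the three pieces are all assumptions; I no longer have the exact mix I used:

```python
import numpy as np

def blended_forecast(series, t, window=6, weeks_per_year=52):
    """Predict week t from three pieces: the moving average over the
    same window a year earlier, the moving average over the most
    recent weeks, and the latest observed figure. Equal weights are
    an assumption -- the original blend wasn't recorded."""
    ma_last_year = np.mean(series[t - weeks_per_year - window:t - weeks_per_year])
    ma_this_year = np.mean(series[t - window:t])
    latest = series[t - 1]
    return (ma_last_year + ma_this_year + latest) / 3.0

# A fake weekly series with a yearly cycle, just to exercise the function.
rng = np.random.default_rng(2)
weeks = np.arange(160)
series = 100 + 10 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 1, len(weeks))
print(round(blended_forecast(series, 150), 1))
```

The last-year term carries the seasonal signal, the recent-weeks term carries the current level, and the latest figure keeps the forecast responsive, which is why it behaved better than the plain six-week moving average when the series moved sharply.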
I didn’t know it at the time, but from what I’ve been reading lately, it seems I had stumbled into building a seasonal autoregressive moving average model of my own.
This is exactly the kind of learning I’ve been looking for, and I’ve already had it!
A rock-solid application of the logic behind time series forecasting with first-hand exposure to how it works.
I’m still having trouble with the ACF and PACF plots, but I’m much more excited about time series forecasting now. I’ve done it before and I’ll do it again!
I wonder what else I’ve taught myself without realising it?