Seq2Seq with LSTM

The sequence to sequence model originates from language translation. Our implementation adapts the model for multi-step time series forecasting. Specifically, given the input series $x_1, \ldots, x_t$, the model maps a window of the input series to the output series:

$$x_{t-p}, x_{t-p+1}, \ldots, x_{t-1} \longrightarrow x_t, x_{t+1}, \ldots, x_{t+h-1}$$

where $p$ is the input history length and $h$ is the forecasting horizon.
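To make the windowing concrete, the sketch below shows one way to slice a series into input/target pairs of lengths $p$ and $h$. It is only an illustration of the mapping above; the function name `make_windows` and the use of NumPy are assumptions, not part of this library's API.

```python
import numpy as np

def make_windows(series, p, h):
    """Slice a 1-D series into (input, target) pairs of length p and h (illustrative)."""
    inputs, targets = [], []
    for t in range(p, len(series) - h + 1):
        inputs.append(series[t - p:t])    # x_{t-p}, ..., x_{t-1}
        targets.append(series[t:t + h])   # x_t, ..., x_{t+h-1}
    return np.stack(inputs), np.stack(targets)

# Example: p = 4 past values predict h = 2 future values
x = np.arange(10, dtype=float)
X, Y = make_windows(x, p=4, h=2)
print(X.shape, Y.shape)  # (5, 4) (5, 2)
```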

Sequence to sequence (Seq2Seq) models consist of an encoder and a decoder. The final state of the encoder is fed as the initial state of the decoder. We can use various models for both the encoder and the decoder. This function uses a Long Short-Term Memory (LSTM) network for both.
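The following is a minimal sketch of such an encoder-decoder LSTM, assuming PyTorch. The class name, layer sizes, and the strategy of feeding each prediction back as the next decoder input are illustrative assumptions, not the exact implementation behind this function.

```python
import torch
import torch.nn as nn

class Seq2SeqLSTM(nn.Module):
    """Encoder-decoder LSTM for multi-step forecasting (illustrative sketch)."""

    def __init__(self, hidden_size=64, horizon=2):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.decoder = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.proj = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, p, 1) -- the input history
        _, state = self.encoder(x)                       # final encoder state
        dec_input = x[:, -1:, :]                         # seed decoder with last observed value
        outputs = []
        for _ in range(self.horizon):
            out, state = self.decoder(dec_input, state)  # decoder initialised with encoder state
            pred = self.proj(out)                        # (batch, 1, 1)
            outputs.append(pred)
            dec_input = pred                             # feed prediction back as next input
        return torch.cat(outputs, dim=1)                 # (batch, h, 1)

# Example: batch of 8 series, history p = 4, horizon h = 2
model = Seq2SeqLSTM(hidden_size=32, horizon=2)
y_hat = model(torch.randn(8, 4, 1))
print(y_hat.shape)  # torch.Size([8, 2, 1])
```

Passing the encoder's final `(h, c)` state as the decoder's initial state is what lets the decoder condition its multi-step forecast on the full input history.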