Seq2Seq with LSTM
The sequence-to-sequence model originates from machine translation. Our implementation adapts the model for multi-step time series forecasting. Specifically, given the input series $\{x_1, x_2, \dots, x_T\}$, the model maps the input series to the output series:

$$\{x_1, x_2, \dots, x_T\} \mapsto \{\hat{y}_{T+1}, \hat{y}_{T+2}, \dots, \hat{y}_{T+H}\},$$

where $T$ is the input history length and $H$ is the forecasting horizon.
Sequence-to-sequence (Seq2Seq) models consist of an encoder and a decoder: the final state of the encoder is fed in as the initial state of the decoder. Various models can serve as the encoder and decoder; this function implements both with a Long Short-Term Memory (LSTM) network.
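To make the encoder-decoder wiring concrete, here is a minimal sketch in PyTorch. The class name, layer sizes, and horizon are illustrative assumptions, not the actual implementation behind this function:

```python
import torch
import torch.nn as nn


class Seq2SeqLSTM(nn.Module):
    """Illustrative LSTM encoder-decoder for multi-step forecasting."""

    def __init__(self, input_size=1, hidden_size=64, horizon=24):
        super().__init__()
        self.horizon = horizon  # H: number of steps to forecast
        # Encoder summarizes the input history into its final (h, c) state.
        self.encoder = nn.LSTM(input_size, hidden_size, batch_first=True)
        # Decoder is unrolled for H steps, seeded with the encoder's final state.
        self.decoder = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.proj = nn.Linear(hidden_size, input_size)

    def forward(self, x):
        # x: (batch, T, input_size) -- the input history of length T.
        _, state = self.encoder(x)        # final encoder state -> decoder init
        step = x[:, -1:, :]               # start decoding from the last observation
        outputs = []
        for _ in range(self.horizon):
            out, state = self.decoder(step, state)
            step = self.proj(out)         # one-step-ahead prediction
            outputs.append(step)
        return torch.cat(outputs, dim=1)  # (batch, H, input_size)


# Usage: forecast H=24 steps from T=168 steps of history.
model = Seq2SeqLSTM(input_size=1, hidden_size=64, horizon=24)
history = torch.randn(8, 168, 1)
forecast = model(history)                 # shape: (8, 24, 1)
```

In this sketch the decoder consumes its own previous prediction at each step; during training, teacher forcing (feeding the ground-truth value instead) is a common alternative.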