We propose a novel recurrent-network-based HDR deghosting method for fusing dynamic sequences of arbitrary length. The proposed method combines convolutional and recurrent architectures to generate visually pleasing HDR images free of ghosting artifacts. We introduce a new recurrent cell architecture, the Self-Gated Memory (SGM) cell, which outperforms the standard LSTM cell while containing fewer parameters and running faster. In the SGM cell, the information flow through a gate is controlled by multiplying the output of the gate by a function of itself. Additionally, we use two SGM cells in a bidirectional setting to improve the output quality. The proposed approach achieves state-of-the-art quantitative performance compared to existing HDR deghosting methods across three publicly available datasets, while simultaneously removing the restriction of training on a fixed number of input images. Through extensive ablations, we demonstrate the importance of the individual components of our approach.
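To illustrate the self-gating idea, here is a minimal NumPy sketch. It assumes the "function of itself" is a sigmoid (a SiLU/Swish-style self-gate, `g * sigmoid(g)`); the exact function and the full cell equations of the SGM are defined in the paper body, and the names below are illustrative, not the paper's.

```python
import numpy as np

def sigmoid(x):
    """Numerically plain logistic function."""
    return 1.0 / (1.0 + np.exp(-x))

def self_gate(g):
    """Self-gating: the gate's output is modulated by a function
    of itself (here a sigmoid), instead of by a separately
    parameterized gating branch as in a standard LSTM."""
    return g * sigmoid(g)

# Strongly negative pre-activations are suppressed toward zero,
# strongly positive ones pass through nearly unchanged.
g = np.array([-10.0, 0.0, 10.0])
out = self_gate(g)
```

Because the modulation reuses the gate's own output, no extra weight matrix is needed for the gating signal, which is consistent with the abstract's claim of fewer parameters than an LSTM cell.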