Fast Decorrelated Batch Normalization

2018 Summer Research at the former Michigan Vision & Learning Lab (now the Princeton VL Lab)
Funded by the Summer Undergraduate Research in Engineering program
Advised by: Jia Deng; Mentored by: Dawei Yang

Batch Normalization (BN) accelerates the training of deep neural networks and improves the accuracy of trained models by normalizing layer inputs within mini-batches. Decorrelated Batch Normalization (DBN) retains the general benefits of BN and improves on it by not only standardizing the layer inputs but also ZCA-whitening them. However, ZCA whitening depends heavily on a matrix decomposition (an eigendecomposition of the covariance matrix) that runs slowly on GPUs for lack of an efficient parallel routine. In this work, Fast Decorrelated Batch Normalization (Fast-DBN) speeds up DBN while retaining its desirable qualities: we approximate the matrix computation so that the layer inputs can be whitened more quickly. We implement Fast-DBN on the PyTorch platform with custom C++ extensions, and we reproduce the experiments that previously compared BN and DBN on multilayer perceptrons and convolutional neural networks, showing that Fast-DBN produces results comparable to DBN while accelerating both training and inference.
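To make the bottleneck and the workaround concrete, here is a minimal PyTorch sketch contrasting exact ZCA whitening, which needs an eigendecomposition of the mini-batch covariance, with an iterative approximation built only from matrix multiplications, which parallelize well on GPUs. The Newton-Schulz iteration shown is one standard way to approximate the inverse square root of the covariance; it is an illustrative assumption, not necessarily the exact approximation Fast-DBN uses, and the function names are hypothetical.

```python
import torch

def zca_whiten_eig(x, eps=1e-5):
    # Exact ZCA whitening: the eigendecomposition here is the step
    # that runs slowly on GPU. x is an (N, d) mini-batch.
    xc = x - x.mean(dim=0, keepdim=True)
    cov = xc.t() @ xc / (x.size(0) - 1)
    eigvals, eigvecs = torch.linalg.eigh(cov)
    inv_sqrt = eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.t()
    return xc @ inv_sqrt

def zca_whiten_newton(x, iters=5, eps=1e-5):
    # Approximate ZCA whitening via Newton-Schulz iteration for
    # cov^{-1/2}: matrix multiplications only, no decomposition.
    # (Illustrative assumption; Fast-DBN's approximation may differ.)
    xc = x - x.mean(dim=0, keepdim=True)
    d = x.size(1)
    eye = torch.eye(d, device=x.device, dtype=x.dtype)
    cov = xc.t() @ xc / (x.size(0) - 1) + eps * eye
    trace = cov.diagonal().sum()
    cov_n = cov / trace                 # normalize so the iteration converges
    p = eye.clone()
    for _ in range(iters):
        p = 0.5 * (3.0 * p - p @ p @ p @ cov_n)   # p -> cov_n^{-1/2}
    inv_sqrt = p / trace.sqrt()         # undo the normalization
    return xc @ inv_sqrt

if __name__ == "__main__":
    x = torch.randn(256, 32)
    for fn in (zca_whiten_eig, zca_whiten_newton):
        y = fn(x)
        yc = y - y.mean(dim=0, keepdim=True)
        cov = yc.t() @ yc / (y.size(0) - 1)
        print(fn.__name__, torch.dist(cov, torch.eye(32)).item())
```

In both cases the covariance of the whitened output approaches the identity; a handful of iterations is usually close enough in practice, while the decomposition is replaced by GPU-friendly matrix products.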

Related work