A Simple Implementation of BatchNorm2D

Posted by yaohong on Thursday, May 27, 2021

As described in the Batch Normalization paper, instead of whitening the features in layer inputs and outputs jointly, we normalize each scalar feature independently, making it have a mean of zero and a variance of 1. For a layer with d-dimensional input x = (x^(1), ..., x^(d)), each dimension is normalized as x̂^(k) = (x^(k) - E[x^(k)]) / sqrt(Var[x^(k)]).
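
For example (a toy illustration of my own, not from the paper), normalizing each dimension of a small batch of 2-dimensional inputs with NumPy:

import numpy as np

x = np.array([[1.0, 10.0],   # a batch of three 2-dimensional inputs
              [2.0, 20.0],
              [3.0, 30.0]])
mean = x.mean(axis=0)                 # per-dimension mean E[x^(k)]
std = np.sqrt(x.var(axis=0) + 1e-05)  # per-dimension sqrt(Var[x^(k)] + eps)
x_hat = (x - mean) / std              # each column now has mean ~0 and variance ~1
print(x_hat)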

1. MyBatchNorm2D

import numpy as np

class MyBatchNorm2D:

    def __init__(self):
        pass

    def forward(self, x):
        # Normalize over every element of the input at once: a single mean and
        # a single variance for the whole tensor, plus a small eps for
        # numerical stability.
        x = np.array(x)
        mean = np.mean(x)
        standard_deviation = np.sqrt(np.var(x) + 1e-05)
        x_norm = (x - mean) / standard_deviation
        return x_norm

input = [[[[ 1.1713, -10.7508],
          [-2.0155, -0.5290],
          [-0.2751,  1.0233]],
         [[-1.4446, -0.8337],
          [-1.0429, -0.8856],
          [ 5.3324,  7.6233]]],
        [[[ 2.1079,  1.6039],
          [-0.8938,  1.1655],
          [ 8.0355, -0.4911]],
         [[ 3.6337,  10.3400],
          [-1.5365,  0.7931],
          [ 0.8472,  1.1318]]]]
x = np.array(input)
bn = MyBatchNorm2D()
x_norm = bn.forward(x)
print("x_norm:", x_norm)

print("np.mean: ", np.mean(x))
print("np.var: ", np.var(x))
print("MyBatchNorm2D np.mean: ", np.mean(x_norm))
print("MyBatchNorm2D np.var: ", np.var(x_norm))

# OUTPUT:
# x_norm: [[[[ 0.0414345  -2.92181622]
#           [-0.75064805 -0.38117689]
#           [-0.31806977  0.00464894]]
#          [[-0.60875025 -0.4569104 ]
#           [-0.50890728 -0.4698102 ]
#           [ 1.07568036  1.64508601]]]
#         [[[ 0.27422743  0.14895769]
#           [-0.47184832  0.0399929 ]
#           [ 1.74753876 -0.3717568 ]]
#          [[ 0.65346666  2.3203247 ]
#           [-0.63159209 -0.05256752]
#           [-0.0391209   0.03161673]]]]
# np.mean:  1.0045958333333334
# np.var:  16.18707780123264
# MyBatchNorm2D np.mean:  0.0
# MyBatchNorm2D np.var:  0.9999993822236513
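
A side note (not in the original post): the 1e-05 added to the variance plays the same role as the eps argument of nn.BatchNorm2d, whose default is also 1e-05. Because of this eps, the variance of the normalized output is var / (var + eps), slightly below 1, which is exactly the value printed above:

import numpy as np

eps = 1e-05                      # same default eps as nn.BatchNorm2d
var = np.var(np.array(input))    # variance of the raw input defined above
print(var / (var + eps))         # ~0.9999993822, matching MyBatchNorm2D np.var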

2. Using BatchNorm2d in PyTorch

input = [[[[ 1.1713, -10.7508],
          [-2.0155, -0.5290],
          [-0.2751,  1.0233]],
         [[-1.4446, -0.8337],
          [-1.0429, -0.8856],
          [ 5.3324,  7.6233]]],
        [[[ 2.1079,  1.6039],
          [-0.8938,  1.1655],
          [ 8.0355, -0.4911]],
         [[ 3.6337,  10.3400],
          [-1.5365,  0.7931],
          [ 0.8472,  1.1318]]]]

import torch
import torch.nn as nn

input = torch.tensor(input)
# 2 channels; affine=False disables the learnable scale/shift, and
# track_running_stats=False makes the layer always use the batch statistics.
bn = nn.BatchNorm2d(2, momentum=None, affine=False, track_running_stats=False)
x_norm = bn(input)
print("BatchNorm2d new_x:", x_norm)

import numpy as np
print("BatchNorm2d np.mean: ", np.mean(np.array(x_norm)))
print("BatchNorm2d np.var: ", np.var(np.array(x_norm)))

# OUTPUT:
# BatchNorm2d new_x: tensor([[[[ 0.2864, -2.6606],
#                               [-0.5013, -0.1339],
#                               [-0.0711,  0.2498]],
#                              [[-0.9184, -0.7553],
#                               [-0.8112, -0.7692],
#                               [ 0.8903,  1.5017]]],
#                             [[[ 0.5179,  0.3933],
#                               [-0.2241,  0.2850],
#                               [ 1.9831, -0.1245]],
#                              [[ 0.4369,  2.2267],
#                               [-0.9429, -0.3212],
#                               [-0.3067, -0.2308]]]])
# BatchNorm2d np.mean:  -9.934108e-09
# BatchNorm2d np.var:  0.99999934
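
Note that the two outputs differ: MyBatchNorm2D above computes one mean and one variance over the whole input, while nn.BatchNorm2d computes them per channel, over the (N, H, W) dimensions. A minimal per-channel sketch (the class name below is my own) that should reproduce the BatchNorm2d output above up to floating-point precision:

import numpy as np

class MyBatchNorm2DPerChannel:
    def forward(self, x):
        x = np.array(x)                               # expected shape (N, C, H, W)
        mean = x.mean(axis=(0, 2, 3), keepdims=True)  # one mean per channel
        var = x.var(axis=(0, 2, 3), keepdims=True)    # one variance per channel
        return (x - mean) / np.sqrt(var + 1e-05)

print(MyBatchNorm2DPerChannel().forward(input))  # works with the list or tensor above

Each channel of the result has mean ~0 and variance ~1, which is what BatchNorm2d does at training time, ignoring the affine scale and shift.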

REFERENCES:

1. PyTorch nn.BatchNorm2d

2. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
