Yaohong

Willing to be humiliated for the sake of the truth

How are backward and step associated with model parameter updates?

How are backward and step associated with model parameter updates? The optimizer accepts the model's parameters and can update them, but how is the loss function associated with those parameters? loss.backward() computes the gradient of the loss with respect to every parameter and stores it in each parameter's .grad field; optimizer.step() then reads those gradients and updates the parameters it was given. REFERENCE: 1. pytorch - connection between loss.backward() and optimizer.step() 2. https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html#forward-and-backward-function-hooks
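A minimal training-loop sketch of how the two calls fit together; the model, optimizer, and data below are placeholders of my own, not from the original post:

    import torch
    import torch.nn as nn

    model = nn.Linear(3, 1)                        # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = nn.MSELoss()

    x, y = torch.randn(8, 3), torch.randn(8, 1)    # placeholder data

    optimizer.zero_grad()           # clear old gradients stored in each p.grad
    loss = criterion(model(x), y)   # forward pass builds the autograd graph
    loss.backward()                 # fills p.grad for every model parameter
    optimizer.step()                # updates the parameters using p.grad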

nn_Module

nn_Module 1. Where are module parameters stored? The parameters live inside the layer objects (such as nn.Linear) that make up the network. A layer is created in the module's __init__ method and must be assigned as an attribute of the module so that its parameters are registered:

    import torch.nn as nn
    import numpy as np

    class TorchDNN(nn.Module):
        def __init__(self, input, hidden, output):
            super(TorchDNN, self).__init__()
            # assigning the layer to self registers its weight and bias
            self.layer_hidden = nn.Linear(input, hidden, bias=True)

        def forward(self, input_data):
            pass

    x = np.
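A short sketch of how the registered parameters can then be inspected; the layer sizes here are arbitrary example values, not taken from the original post:

    model = TorchDNN(3, 5, 2)
    for name, p in model.named_parameters():
        # prints layer_hidden.weight with shape (5, 3) and layer_hidden.bias with shape (5,)
        print(name, tuple(p.shape))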

Understanding arange, unsqueeze, repeat, stack methods in PyTorch

Understanding arange, unsqueeze, repeat, stack methods in PyTorch. torch.arange(start=0, end, step=1) returns a 1-D tensor of size ⌈(end − start) / step⌉ whose values begin at start and increase by the common difference step. torch.unsqueeze(input, dim) returns a new tensor with a dimension of size one inserted at the specified position; a dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. Tensor.repeat(*sizes) returns a tensor whose shape is the original shape multiplied element-wise by the arguments; the arguments are aligned with the tensor's trailing dimensions, so when more arguments than dimensions are given, the last dimension of the new shape is still the last dimension of the original shape times the last argument, and the extra leading arguments create new dimensions.
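A few worked calls showing the shapes these methods produce; the concrete sizes are illustrative choices of mine, not from the original post:

    import torch

    a = torch.arange(0, 6, 2)        # tensor([0, 2, 4]), size (6 - 0) / 2 = 3
    b = a.unsqueeze(0)               # shape (1, 3): a size-one dim inserted at position 0
    c = b.repeat(4, 2)               # shape (1*4, 3*2) = (4, 6)
    d = a.repeat(2, 3, 4)            # more args than dims: shape (2, 3, 3*4) = (2, 3, 12)
    s = torch.stack([a, a + 1])      # stacks along a new dim 0: shape (2, 3)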

L1 L2 Regularization - Optimizer

Optimizer: L1 L2 Regularization. L1 and L2 loss functions are two different types of loss function: L1 loss = sum|Y − f(x)|, L2 loss = sum(Y − f(x))^2 (regression with an L1 penalty is called Lasso, with an L2 penalty Ridge). L1 and L2 regularization: Y_predict = sum(w_i * x_i) + b, MSE = sum(Y − Y_predict)^2, L1: loss = MSE + λ * sum|w_i|, L2: loss = MSE + λ * sum(w_i^2). What does "penalize the weights" mean? It means adding an extra term built from the weights to the loss function, so
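A minimal NumPy sketch of the two regularized losses described above; the weights, data, and λ value are made-up examples, not from the original post:

    import numpy as np

    w = np.array([0.5, -1.2, 0.3])            # example weights
    x = np.array([[1.0, 2.0, 0.5],
                  [0.3, -1.0, 2.0]])          # example inputs, one row per sample
    y = np.array([1.0, -0.5])                 # example targets
    lam = 0.01                                # regularization strength λ

    y_pred = x.dot(w)                         # Y_predict = sum(w_i * x_i)
    mse = np.sum((y - y_pred) ** 2)

    loss_l1 = mse + lam * np.sum(np.abs(w))   # L1 (Lasso) regularized loss
    loss_l2 = mse + lam * np.sum(w ** 2)      # L2 (Ridge) regularized loss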

How to Label Voice with Praat for Machine Learning

Praat

How to Label Voice with Praat for Machine Learning 1. Install 1.1 Download Praat 1. Open the Praat: doing Phonetics by Computer website; 2. Choose your OS in the download area in the upper-left corner of the website; 3. Then click praat6150_mac.dmg or praat6150_win64.zip to download the file. For example, my OS is macOS, so in my case I should download praat6150_mac.dmg and install it. Option: you can also download the file from GitHub; refer to Praat on GitHub. 1.

A Simple Implementation of BatchNorm2D

A Simple Implementation of BatchNorm2D. The first is that instead of whitening the features in layer inputs and outputs jointly, we will normalize each scalar feature independently, by making it have the mean of zero and the variance of 1. For a layer with d-dimensional input x = (x(1) ... x(d)), we will normalize each dimension. 1. MyBatchNorm2D

    import numpy as np

    class MyBatchNorm2D:
        def __init__(self):
            pass

        def forward(self, x):
            x = np.array(x)
            mean = np.
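The excerpt is cut off, so here is a hedged sketch of how such a forward pass is typically completed for an (N, C, H, W) input; the epsilon value and per-channel axis choice are standard assumptions of mine, not confirmed by the original post:

    import numpy as np

    class MyBatchNorm2D:
        def __init__(self, eps=1e-5):
            self.eps = eps

        def forward(self, x):
            # x is expected in (N, C, H, W) layout; normalize each channel independently
            x = np.array(x, dtype=np.float64)
            mean = np.mean(x, axis=(0, 2, 3), keepdims=True)
            var = np.var(x, axis=(0, 2, 3), keepdims=True)
            return (x - mean) / np.sqrt(var + self.eps)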

model(x) vs model.forward(x)

model(x) vs model.forward(x) The __call__ magic method in nn.Module invokes the forward() method and also takes care of the registered hooks and module state, so we should call model(x) rather than calling model.forward(x) directly. REFERENCE: 1. Why there are different output between model.forward(input) and model(input) 2. Calling forward function without .forward() 3. torch.nn.module codes
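A small sketch of the difference: a forward hook registered on the module fires when you call model(x) but not when you call model.forward(x) directly; the toy module and hook below are illustrative, not from the original post:

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)

    def hook(module, inputs, output):
        print("forward hook fired")

    model.register_forward_hook(hook)

    x = torch.randn(1, 4)
    _ = model(x)           # goes through __call__: prints "forward hook fired"
    _ = model.forward(x)   # bypasses __call__, so the hook does not fire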

DNN RNN CNN codes

Simple DNN RNN CNN example codes 1. DNN - Deep neural network

    import numpy as np

    class myDNN:
        # 3 * 5 * 2
        def __init__(self, input, hidden, output):
            # hidden layer random weights
            # Note: hidden_weight could instead have the shape (input, hidden);
            # correspondingly, self.hidden_out should equal
            # np.dot(input_data, self.hidden_weight) to match that shape.
            self.hidden_weight = np.random.rand(hidden, input)
            self.hidden_bias = np.random.rand(hidden)
            # output layer random weights
            self.output_weight = np.random.rand(output, hidden)
            self.output_bias = np.random.rand(output)

        def forward(self, input_data):
            self.
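The preview cuts off inside forward(); the following is a hedged guess at how that forward pass is typically completed for weights of shape (hidden, input). The sigmoid activation is my assumption and is not confirmed by the original post:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # assumed completion of myDNN.forward; could be defined inside the class
    def forward(self, input_data):
        input_data = np.array(input_data)
        # hidden layer: (hidden, input) @ (input,) + (hidden,) -> (hidden,)
        self.hidden_out = sigmoid(np.dot(self.hidden_weight, input_data) + self.hidden_bias)
        # output layer: (output, hidden) @ (hidden,) + (output,) -> (output,)
        self.output_out = sigmoid(np.dot(self.output_weight, self.hidden_out) + self.output_bias)
        return self.output_out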

Averaging histograms

Averaging histograms An image histogram counts how many pixels take each value and displays the counts in a graph: the x axis is the pixel value, ranging from 0 to 255, and the y axis is the number of pixels with that value. 1. How to average the histogram? Our goal is to generate a new image with a more even histogram distribution. 1.1 An equation: the accumulated value of the histogram
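The excerpt stops at the accumulated (cumulative) histogram, so here is a hedged NumPy sketch of histogram equalization built on that cumulative value; the 256-bin 8-bit grayscale assumption is mine, not stated in the excerpt:

    import numpy as np

    def equalize_histogram(img):
        # img: 2-D uint8 grayscale image with values in 0..255
        hist = np.bincount(img.ravel(), minlength=256)   # count of each pixel value
        cdf = np.cumsum(hist)                            # accumulated histogram value
        cdf_min = cdf[cdf > 0][0]
        # map each original value so the output histogram is spread more evenly
        lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
        return lut[img]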

Some notes on Convolution Course

Some notes on Convolution course What is padding? Padding adds pixels to the border of the original image; for example, a 6*6 image becomes an 8*8 image if we add one pixel of padding on each side. Valid convolution vs same convolution: valid convolution uses no padding, so the output contains only the pixels actually produced by convolving the original image with the filter; same convolution adds padding so that the output image has the same size as its input image.
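A quick check of the output-size formulas behind those two modes; the 6*6 image and 3*3 filter are example numbers consistent with the note above, not values from the original course:

    n, f, p = 6, 3, 1             # image size, filter size, padding per side

    valid_out = n - f + 1         # no padding: 6 - 3 + 1 = 4, so a 4*4 output
    same_out = n + 2 * p - f + 1  # with p = (f - 1) / 2 = 1: 6 + 2 - 3 + 1 = 6, same size
    print(valid_out, same_out)    # 4 6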

Deep Learning: My First Digit-Recognition Project

The step-by-step workflow of deep learning

Deep Learning: My First Digit-Recognition Project. Today, following Google's officially recommended workflow, I put together a development template. Not every deep-learning project is implemented strictly by this template, and different projects add or drop some of the steps, but

Terms in machine learning

Terms in machine learning FLOPS: FLOPS = floating-point operations per second, a measure of hardware speed, whereas FLOPs = floating-point operations, a count of the compute a model needs; for example, a forward pass costing 2 GFLOPs takes roughly 0.001 s on hardware sustaining 2 TFLOPS. REFERENCE: what-is-flops-in-field-of-deep-learning