Yaohong

Willing to be humiliated for the sake of the truth

How are backward and step associated with model parameter updates?

How are backward and step associated with model parameter updates? The optimizer accepts the parameters of the model, so it can update them, but how is the loss function associated with the parameters? loss.backward() optimizer.step() REFERENCE: 1. pytorch - connection between loss.backward() and optimizer.step() 2. https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html#forward-and-backward-function-hooks
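A minimal sketch (my own toy model and names, not from the post) of how the two calls connect: the loss tensor carries the autograd graph back to the parameters, backward() fills each parameter's .grad field, and step() updates the same parameter objects the optimizer was constructed with.

import torch
import torch.nn as nn

model = nn.Linear(3, 1)                                   # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # optimizer holds references to the parameters
criterion = nn.MSELoss()

x = torch.randn(4, 3)
y = torch.randn(4, 1)

optimizer.zero_grad()              # clear gradients from the previous step
loss = criterion(model(x), y)      # forward pass builds the autograd graph
loss.backward()                    # writes p.grad for every parameter reachable from loss
optimizer.step()                   # reads p.grad and updates p (p -= lr * p.grad for plain SGD)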

nn_Module

nn_Module 1.Where are module parameters configured? The parameters are stored in the layer objects (such as nn.Linear) that make up the network. A neural network layer is defined in the __init__ method of the module and needs to be assigned as an instance attribute so that its parameters are registered; import torch.nn as nn import numpy as np class TorchDNN(nn.Module): def __init__(self, input, hidden, output): super(TorchDNN, self).__init__(); self.layer_hidden = nn.Linear(input, hidden, bias = True); def forward(self, input_data): pass x = np.
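A small sketch (my own sizes and names) showing why the assignment to self matters: only layers stored as attributes of the module get their weights and biases registered as parameters.

import torch.nn as nn

class TorchDNN(nn.Module):
    def __init__(self, n_input, n_hidden, n_output):
        super().__init__()
        self.layer_hidden = nn.Linear(n_input, n_hidden, bias=True)   # registered
        self.layer_output = nn.Linear(n_hidden, n_output, bias=True)  # registered

    def forward(self, x):
        return self.layer_output(self.layer_hidden(x))

model = TorchDNN(3, 5, 2)
for name, p in model.named_parameters():   # each nn.Linear contributes a weight and a bias
    print(name, tuple(p.shape))
# layer_hidden.weight (5, 3), layer_hidden.bias (5,), layer_output.weight (2, 5), layer_output.bias (2,)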

Understanding arange, unsqueeze, repeat, stack methods in PyTorch

Understanding arange, unsqueeze, repeat, stack methods in PyTorch torch.arange(start=0, end, step=1) returns a 1-D tensor of size ⌈(end - start) / step⌉ whose values begin at start and increase with common difference step. torch.unsqueeze(input, dim) returns a new tensor with a dimension of size one inserted at the specified position; a dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. tensor.repeat(*sizes) returns a new tensor whose shape is the original shape multiplied by the arguments element-wise; the arguments are aligned with the trailing dimensions, so the last dimension of the new shape = the last dimension of the original shape * the last argument;
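A small sketch (my own values) exercising the four methods described above:

import torch

x = torch.arange(0, 6, 2)            # tensor([0, 2, 4]), shape (3,)
u = torch.unsqueeze(x, 0)            # shape (1, 3): a size-one dim inserted at position 0
r = u.repeat(2, 3)                   # shape (2, 9): each dim multiplied, (1*2, 3*3)
s = torch.stack([x, x + 1], dim=0)   # shape (2, 3): stacks the tensors along a new dim
print(x.shape, u.shape, r.shape, s.shape)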

L1 L2 Regularization - Optimizer

Optimizer: L1 L2 Regularization L1 and L2 loss functions are different types of loss function. L1: Σ|Y - f(x)| (Lasso) L2: Σ(Y - f(x))^2 (Ridge) L1, L2 regularization: Y_predict = Σ(w_i * x_i + b_i), MSE = Σ(Y - Y_predict)^2, L1: loss = MSE + λ Σ|w_i|, L2: loss = MSE + λ Σ(w_i)^2 What does it mean to penalize the weights? It means adding another term on the parameters to the loss function, so that large weights increase the loss
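A minimal numpy sketch (my own notation; lambda_ and the toy data are placeholders, not from the post) of the two regularized losses above:

import numpy as np

def mse(y, y_pred):
    return np.sum((y - y_pred) ** 2)

def l1_regularized_loss(y, y_pred, w, lambda_=0.01):
    return mse(y, y_pred) + lambda_ * np.sum(np.abs(w))   # MSE + λ Σ|w_i|

def l2_regularized_loss(y, y_pred, w, lambda_=0.01):
    return mse(y, y_pred) + lambda_ * np.sum(w ** 2)      # MSE + λ Σ(w_i)^2

w = np.array([0.5, -2.0, 3.0])
x = np.random.rand(4, 3)
y = np.random.rand(4)
y_pred = x @ w
print(l1_regularized_loss(y, y_pred, w), l2_regularized_loss(y, y_pred, w))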

How to Label Voice with Praat for Machine Learning

Praat

How to Label Voice with Praat for Machine Learning 1.Install 1.1 Download praat 1.Open the Praat: doing Phonetics by Computer website; 2.Choose your OS in the download area in the upper left corner of the website; 3.Then click praat6150_mac.dmg or praat6150_win64.zip to download the file; For example, my OS is macOS, so in my case I should download praat6150_mac.dmg and install it. Option: You can also download the file from GitHub; refer to Praat on GitHub 1.

Anaconda simple usage

Anaconda simple usage 1.1 Download and install – macOS Download file: click to download. Install after downloading. Run this command in a terminal to see your Anaconda version: $conda -V conda 4.10.1 Use conda info to see the conda configuration: (base) $ conda info 2.Anaconda Usage 2.1 List all environments (base) $ conda info -e # conda environments: # base /Users/Rhys/opt/anaconda3 2.2 Create an environment (base) $ conda create -n py36 python=3.6 2.3 Activate an environment (base) $ conda activate py36 (py36) $ The prompt changes after activating;

Simple AI expert Enhanced Loop

Simple AI expert Enhanced Loop Habits: daily plan, weekly plan, monthly plan, 10 minutes of reading, daily self-examination. Loop 1: assumption -> design an experiment -> do it -> feedback -> conclusion. Loop 2: choose a subject -> share it weekly with my classmates -> get feedback and update -> make another share;

The Simple Implementation of BatchNorm2D

The Simple Implementation of BatchNorm2D The first is that instead of whitening the features in layer inputs and outputs jointly, we will normalize each scalar feature independently, by making it have the mean of zero and the variance of 1. For a layer with d-dimensional input x = (x(1) ... x(d)), we will normalize each dimension 1.MyBatchNorm2D import numpy as np; class MyBatchNorm2D: def __init__(self): pass def forward(self, x): x = np.array(x); mean = np.
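A hedged completion sketch (my own code; the excerpt cuts off above, and the learnable gamma/beta and running statistics of a full BatchNorm are omitted here) normalizing each channel of an (N, C, H, W) batch to zero mean and unit variance:

import numpy as np

class MyBatchNorm2D:
    def __init__(self, eps=1e-5):
        self.eps = eps

    def forward(self, x):
        x = np.array(x, dtype=np.float64)             # expected shape (N, C, H, W)
        mean = x.mean(axis=(0, 2, 3), keepdims=True)  # per-channel mean over batch and spatial dims
        var = x.var(axis=(0, 2, 3), keepdims=True)    # per-channel variance
        return (x - mean) / np.sqrt(var + self.eps)   # zero mean, unit variance per channel

bn = MyBatchNorm2D()
out = bn.forward(np.random.rand(2, 3, 4, 4))
print(out.mean(axis=(0, 2, 3)), out.var(axis=(0, 2, 3)))   # ≈ 0 and ≈ 1 for each channel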

model(x) vs model.forward(x)

model(x) vs model.forward(x) The __call__ magic method in nn.Module invokes the forward() method and also takes care of the hooks and module state that PyTorch manages, so we should use model(x) rather than calling model.forward(x) directly. REFERENCE: 1. Why there are different output between model.forward(input) and model(input) 2. Calling forward function without .forward() 3. torch.nn.module codes
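A small sketch (my own example) of the difference: going through __call__ runs registered hooks, while calling forward() directly bypasses them.

import torch
import torch.nn as nn

model = nn.Linear(2, 1)
model.register_forward_hook(lambda module, inp, out: print("forward hook fired"))

x = torch.randn(1, 2)
_ = model(x)            # prints "forward hook fired"
_ = model.forward(x)    # hook is skipped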

DNN RNN CNN codes

Simple DNN RNN CNN example codes 1.DNN-Deep neural network import numpy as np; class myDNN: # 3 * 5 * 2 def __init__(self, input, hidden, output): # hidden layer random weights # Note: hidden_weight can also have the shape (input, hidden); correspondingly, `self.hidden_out` should equal `np.dot(input_data, self.hidden_weight)` to match that shape. self.hidden_weight = np.random.rand(hidden, input); self.hidden_bias = np.random.rand(hidden); # output layer random weights self.output_weight = np.random.rand(output,hidden); self.output_bias = np.random.rand(output); # def forward(self, input_data): self.
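A hedged completion sketch (the 3 * 5 * 2 sizes follow the post's comment; the forward body and the sigmoid activation are my own guess, not the post's code) of a plain two-layer forward pass:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class myDNN:
    def __init__(self, n_input, n_hidden, n_output):
        self.hidden_weight = np.random.rand(n_hidden, n_input)
        self.hidden_bias = np.random.rand(n_hidden)
        self.output_weight = np.random.rand(n_output, n_hidden)
        self.output_bias = np.random.rand(n_output)

    def forward(self, input_data):
        hidden_out = sigmoid(np.dot(self.hidden_weight, input_data) + self.hidden_bias)
        return sigmoid(np.dot(self.output_weight, hidden_out) + self.output_bias)

print(myDNN(3, 5, 2).forward(np.random.rand(3)))   # two output values in (0, 1)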

Use Opencv stitching_detailed To Stitch Segmentation Images

Use Opencv stitching_detailed To Stitch Segmentation Images Environment: python version 3.7 opencv-python version 4.5.1.48 1.Download the stitching_detailed file and save it with the file name stitching_detailed.py; 2.Run the command; $python.exe stitching_detailed.py image_1.png image_2.png image_3.png image_4.png image_5.png image_6.png image_7.png image_8.png image_9.png origin.png --features=brisk --matcher=affine Note that images 1~9 are segmentation images and origin.png is the full picture. REFERENCE: stitching_detailed The stitching_detailed.py source code is as follows: """ Stitching sample (advanced) =========================== Show how to use Stitcher API from python.

How does warpPerspective work?

How does warpPerspective work? 1.Warp perspective with cv2 if __name__ == "__main__": # coordinates: (y,x), left_top, right_top, left_bottom, right_bottom src = np.float32([[20.0, 0.0], [20.0 ,315.0], [186.0, 17.2], [181.0, 299.0]]) dst = np.float32([[0.0, 0.0], [0.0, 315.0], [202.0, 7.0], [200.0, 306.0]]) # load image warp_img = cv2.imread("./my_wide_angle_orig.jpg") warp_img = cv2.cvtColor(warp_img, cv2.COLOR_BGR2RGB) print("warp_img: ",warp_img.shape) # (638, 958, 3) width = int(warp_img.shape[1]/3) height = int(warp_img.shape[0]/3) warp_img = cv2.resize(warp_img, (width,height), interpolation=cv2.INTER_LINEAR) print("warp_img.shape:",warp_img.shape) # (212, 319, 3) ## orig image plt.
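A minimal sketch (my own points and a synthetic image, not the post's photo) of the basic flow: estimate a 3x3 homography from four point pairs with getPerspectiveTransform, then let warpPerspective remap every pixel through it.

import cv2
import numpy as np

# four corners in the source image and where they should land in the output
src = np.float32([[0, 0], [299, 0], [0, 299], [299, 299]])
dst = np.float32([[20, 10], [280, 0], [0, 299], [299, 280]])

M = cv2.getPerspectiveTransform(src, dst)                        # 3x3 homography matrix
img = np.random.randint(0, 255, (300, 300, 3), dtype=np.uint8)   # stand-in image
warped = cv2.warpPerspective(img, M, (300, 300))                 # each output pixel sampled via M
print(M.shape, warped.shape)                                     # (3, 3) (300, 300, 3)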

Averaging histograms

Averaging histograms An image histogram is the count of pixels at each pixel value, displayed as a graph. The x axis of the graph is the pixel value, ranging from 0 to 255; the y axis of the graph is the number of pixels with that value; 1.How to average a histogram? Our goal is to generate a new image with a more even histogram distribution. 1.1 An equation The accumulated value of the histogram
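A hedged sketch (my own implementation of the standard equalization idea; the post's own equation is cut off above) that maps each pixel value through the normalized accumulated histogram:

import numpy as np

def equalize(gray):
    hist = np.bincount(gray.ravel(), minlength=256)    # count of each value 0..255
    cdf = hist.cumsum()                                # accumulated histogram
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)         # old value -> new value
    return lut[gray]

img = np.random.randint(50, 100, (64, 64), dtype=np.uint8)   # low-contrast image
out = equalize(img)
print(img.min(), img.max(), out.min(), out.max())            # output spans roughly 0..255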

How does Keras add two layers?

How does Keras add two layers? The tf.keras.layers.add() method can add two layers. What it does is sum the values at corresponding positions in the two layers. For example: input_shape = (1,2,3) import tensorflow as tf tf.enable_eager_execution() print("----------x1 tensor-----------") x1 = tf.random.uniform(input_shape, maxval=10, dtype=tf.dtypes.int32) tf.print(x1); print("----------x2 tensor-----------") x2 = tf.random.uniform(input_shape, maxval=10, dtype=tf.dtypes.int32) tf.print(x2); print("----------add 2 tensors-----------") y = tf.keras.layers.add([x1,x2]) tf.print(y); Output: ----------x1 tensor----------- [[[7 6 1] [5 7 2]]] ----------x2 tensor----------- [[[0 7 8] [2 9 6]]] ----------add 2 tensors----------- [[[7 13 9] [7 16 8]]]

How to calculate the number of parameters of an RNN and LSTM?

How to calculate the number of parameters of an RNN and LSTM? Environment: python version: 3.7.4 pip version: 19.0.3 numpy version: 1.19.4 matplotlib version: 3.3.3 tensorflow version: 1.14.0 keras version: 2.1.5 The code is as follows: from keras.layers import SimpleRNN from keras.models import Model from keras import Input inputs = Input((None, 5)) simple_rnn = SimpleRNN(4) output = simple_rnn(inputs) # The output has shape `[32, 4]`. model = Model(inputs,output)
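For reference (my own worked check, not part of the excerpt), a SimpleRNN with `units` outputs over `input_dim` input features has units*input_dim (input kernel) + units*units (recurrent kernel) + units (bias) parameters:

# Worked check for the SimpleRNN(4) on 5-dimensional inputs defined above.
units, input_dim = 4, 5
params = units * input_dim + units * units + units
print(params)   # 40, which is what model.summary() reports for this layer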

Creating a simple RNN network

Creating a simple RNN network Environment: python version: 3.7.4 pip version: 19.0.3 numpy version: 1.19.4 matplotlib version: 3.3.3 tensorflow version: 1.14.0 keras version: 2.1.5 The code is as follows: import keras from keras import backend as K from keras.layers import RNN class MinimalRNNCell(keras.layers.Layer): def __init__(self, units,use_bias = True, **kwargs): self.units = units self.state_size = units self.use_bias = use_bias super(MinimalRNNCell, self).__init__(**kwargs) def build(self, input_shape): self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
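The excerpt cuts off inside build(); as a hedged completion (following the standard Keras custom-cell pattern rather than the post's exact code), the cell needs an input kernel, a recurrent kernel, an optional bias, and a call() that mixes the current input with the previous state:

import keras
from keras import backend as K
from keras.layers import RNN, Input
from keras.models import Model

class MinimalRNNCell(keras.layers.Layer):
    def __init__(self, units, use_bias=True, **kwargs):
        self.units = units
        self.state_size = units
        self.use_bias = use_bias
        super(MinimalRNNCell, self).__init__(**kwargs)

    def build(self, input_shape):
        # input-to-hidden and hidden-to-hidden weights
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer='uniform', name='kernel')
        self.recurrent_kernel = self.add_weight(shape=(self.units, self.units),
                                                initializer='uniform', name='recurrent_kernel')
        if self.use_bias:
            self.bias = self.add_weight(shape=(self.units,),
                                        initializer='zeros', name='bias')
        self.built = True

    def call(self, inputs, states):
        prev_output = states[0]
        h = K.dot(inputs, self.kernel)
        if self.use_bias:
            h = K.bias_add(h, self.bias)
        output = h + K.dot(prev_output, self.recurrent_kernel)
        return output, [output]

# wrap the cell in an RNN layer and build a tiny model
x = Input((None, 5))
y = RNN(MinimalRNNCell(4))(x)
model = Model(x, y)
model.summary()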

How to calculate the parameters of a BatchNormalization layer?

BatchNormalization parameters = number of filters in the previous convolution layer × 4

How to calculate the parameters of a BatchNormalization layer? # Environment: # OS macOS Catalina 10.15.6 # python 3.7 # pip 20.1.1 # tensorflow 1.14.0 # Keras 2.1.5 from keras.models import Sequential from keras.layers import Conv2D,BatchNormalization model = Sequential(); #
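As a hedged worked example (my own layer sizes, continuing the imports above), each output channel of the preceding Conv2D contributes four BatchNormalization parameters: gamma, beta, moving_mean and moving_variance, of which only gamma and beta are trainable.

from keras.models import Sequential
from keras.layers import Conv2D, BatchNormalization

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(28, 28, 3)))   # 32 output channels (filters)
model.add(BatchNormalization())                          # 4 parameters per channel
model.summary()   # BatchNormalization reports 32 * 4 = 128 params (64 of them non-trainable)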

What is bilinear interpolation?

Of the four neighboring points, the closer the interpolated point is to a neighbor, the more that neighbor influences the interpolated value.

What is bilinear interpolation? In image processing we sometimes need to enlarge an image. For example, the original image is 300*300 px; if it has to be displayed on a 500*500 screen, one method is
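A hedged sketch (my own implementation) of interpolating a single point: the four neighboring pixels are weighted by how close the point is to each of them.

import numpy as np

def bilinear(img, y, x):
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] +   # a neighbor's weight grows as the point gets closer to it
            (1 - dy) * dx       * img[y0, x1] +
            dy       * (1 - dx) * img[y1, x0] +
            dy       * dx       * img[y1, x1])

img = np.arange(9, dtype=np.float64).reshape(3, 3)
print(bilinear(img, 0.5, 0.5))   # 2.0: the average of the four neighbors 0, 1, 3, 4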

How to calculate the parameters of a convolution layer?

Convolution layer parameters = (kernel height × kernel width × kernel channels + 1) × number of kernels (the convolution depth)

How to calculate the parameters of a convolution layer? Before calculating the parameters, let's look at a few small questions. 1. How do we know how many channels a convolution kernel has? The number of channels of a kernel = the number of channels of the previous layer; for example, if the previous
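A hedged worked check of the formula above (my own layer sizes): a 3x3 kernel over a 3-channel input, with 32 kernels, gives (3*3*3 + 1) * 32 = 896 parameters, which matches what Keras reports for such a Conv2D layer.

# (kernel_h * kernel_w * kernel_channels + 1) * n_kernels
kernel_h, kernel_w, kernel_channels, n_kernels = 3, 3, 3, 32
params = (kernel_h * kernel_w * kernel_channels + 1) * n_kernels
print(params)   # 896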

How to calculate the parameters of fully connected neurons

Fully connected parameters of the current layer = number of neurons in the previous layer × number of neurons in the current layer + number of biases in the current layer

How to calculate the parameters of fully connected neurons? The model source code is: from tensorflow.keras import models from tensorflow.keras import layers network = models.Sequential() network.add(layers.Dense(504, activation='relu', input_shape=(504,))) network.add(layers.Dense(11, activation='softmax')) network.summary() Output: Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 504) 254520 _________________________________________________________________ dense_1 (Dense) (None, 11) 5555 ================================================================= Total params: 260,075 Trainable params: 260,075 Non-trainable params: 0
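A hedged arithmetic check (my own, against the summary above) of the formula: the first Dense layer has 504*504 + 504 parameters and the second has 504*11 + 11.

# previous neurons * current neurons + current biases
dense_1 = 504 * 504 + 504   # 254520
dense_2 = 504 * 11 + 11     # 5555
print(dense_1, dense_2, dense_1 + dense_2)   # 254520 5555 260075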