Yaohong

Willing to be humiliated for the sake of the truth

Rhythmically Intermittent Charging When My iPhone 12 Connects to macOS

Fixing the iPhone repeatedly disconnecting and reconnecting when connected to macOS

Rhythmically Intermittent Charging When My iPhone 12 Connects to macOS Recently, when my iPhone connects to macOS, the battery icon on the phone alternates between showing the charging indicator (lightning bolt) for a few seconds and then losing it (no charging indicator). This continuous loop prevents the phone from charging through the Mac and also hinders debugging iPhone apps. Mac system version with the issue: macOS Monterey Version 12.7.3

Xcode or Android Studio is unable to list my iPhone device on macOS

Xcode or Android Studio is unable to list my iPhone device I encountered an issue where my iPhone 12 was not listed in Android Studio while I was developing. The issue disappeared after I restarted macOS, but it occurred again when I woke my MacBook Pro from sleep. I noticed a new process running in Activity Monitor while the issue occurred. Here is the process list while

Distribution failed with errors when developing an iOS app

Distribution failed with errors: Asset validation failed The product archive is invalid. The Info.plist must contain a LSApplicationCategoryType key, whose value is the UTI for a valid category. For more details, see "Submitting your Mac apps to the App Store". (ID: 67f59c1b-bb08-4694-978f-11d07ff31357) Solve this issue by adding the following keys to <project root folder>/macos/Runner/Info.plist: <key>LSApplicationCategoryType</key> <string>public.app-category.productivity</string> For valid values of LSApplicationCategoryType, refer to https://developer.apple.com/documentation/bundleresources/information_property_list/lsapplicationcategorytype
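If you prefer to patch the file from a script instead of editing the XML by hand, a minimal sketch using Python's standard plistlib could look like the following; the Info.plist path and the productivity category are taken from the post and are placeholders you may need to adjust:

```python
import plistlib

# Path from the post; adjust to your own project layout.
PLIST_PATH = "macos/Runner/Info.plist"

with open(PLIST_PATH, "rb") as f:
    info = plistlib.load(f)

# Add the required category key; productivity is just one valid UTI value.
info["LSApplicationCategoryType"] = "public.app-category.productivity"

with open(PLIST_PATH, "wb") as f:
    plistlib.dump(info, f)
```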

Flutter HTTP host maven.google.com is not reachable on Windows

Flutter HTTP host https://maven.google.com/ is not reachable on Windows 10 The following error messages appear after executing flutter doctor in a terminal prompt on Windows 10. Doctor summary (to see all details, run flutter doctor -v): [√] Flutter (Channel stable, 2.10.3, on Microsoft Windows [Version 10.0.19044.1586], locale zh-CN) [√] Android toolchain - develop for Android devices (Android SDK version 30.0.3) [√] Chrome - develop

How are backward and step associated with model parameter updates?

How are backward and step associated with model parameter updates? The optimizer accepts the parameters of the model and can update them, but how is the loss function associated with those parameters? loss.backward() optimizer.step() REFERENCE: 1. pytorch - connection between loss.backward() and optimizer.step() 2. https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html#forward-and-backward-function-hooks
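A minimal sketch of how the two calls fit together in one training step (the model, data, and learning rate here are made-up placeholders): loss.backward() writes gradients into each parameter's .grad field, and optimizer.step() reads those .grad fields to update the same parameter tensors it was given at construction time.

```python
import torch
import torch.nn as nn

# Toy model and data, purely for illustration.
model = nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

x = torch.randn(8, 3)
y = torch.randn(8, 1)

optimizer.zero_grad()          # clear old gradients stored in param.grad
loss = criterion(model(x), y)  # forward pass builds the autograd graph
loss.backward()                # fills param.grad for every model parameter
optimizer.step()               # updates those same parameters using param.grad
```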

nn_Module

nn_Module 1. Where are module parameters configured? The parameters are stored in the network's layers. A neural network layer is defined in the __init__ method of the module and needs to be assigned to an instance attribute so its parameters are registered; import torch.nn as nn import numpy as np class TorchDNN(nn.Module): def __init__(self, input, hidden, output): super(TorchDNN, self).__init__(); self.layer_hidden = nn.Linear(input, hidden, bias = True); def forward(self, input_data): pass x = np.
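A minimal, self-contained sketch of the idea (the layer sizes are arbitrary): assigning an nn.Linear to self inside __init__ is what registers its weight and bias, so they show up in model.parameters() and can be handed to an optimizer.

```python
import torch
import torch.nn as nn

class TorchDNN(nn.Module):
    def __init__(self, n_input, n_hidden, n_output):
        super().__init__()
        # Assigning layers to self registers their parameters with the module.
        self.layer_hidden = nn.Linear(n_input, n_hidden, bias=True)
        self.layer_output = nn.Linear(n_hidden, n_output, bias=True)

    def forward(self, x):
        return self.layer_output(torch.relu(self.layer_hidden(x)))

model = TorchDNN(3, 5, 2)
# Each registered layer contributes a weight tensor and a bias tensor.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
```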

GitHub SSH key didn't work

Check the git URL configuration of your project

GitHub SSH key didn’t work After you configure an SSH key on GitHub, you can test your SSH connection. 1. Open Terminal. 2. Enter the following: $ ssh -T git@github.com If you see your GitHub username in the resulting message, the key is configured successfully. But when you run git pull at the root of the project, it still prompts you to enter your GitHub username. What is wrong?

Understanding arange, unsqueeze, repeat, stack methods in PyTorch

Understanding arange, unsqueeze, repeat, stack methods in PyTorch torch.arange(start=0, end, step=1) returns a 1-D tensor of size ceil((end - start) / step) whose values begin at start and increase by the common difference step. torch.unsqueeze(input, dim) returns a new tensor with a dimension of size one inserted at the specified position; a dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. tensor.repeat(*sizes) returns a tensor whose new shape is the original shape multiplied element-wise by the arguments; if there are more arguments than the tensor has dimensions, extra leading dimensions are added, and the last dimension of the new shape is still the last dimension of the original shape multiplied by the last argument;
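A short sketch (with made-up shapes) tying the four methods together; the comments show the shape each call produces.

```python
import torch

# arange: 1-D tensor [0, 1, 2, 3, 4]
a = torch.arange(0, 5, 1)
print(a.shape)                      # torch.Size([5])

# unsqueeze: insert a size-1 dimension at position 0 -> shape (1, 5)
b = a.unsqueeze(0)
print(b.shape)                      # torch.Size([1, 5])

# repeat: multiply each dimension by the corresponding argument -> (2, 15)
c = b.repeat(2, 3)
print(c.shape)                      # torch.Size([2, 15])

# stack: join equal-shaped tensors along a new dimension -> (2, 5)
d = torch.stack([a, a], dim=0)
print(d.shape)                      # torch.Size([2, 5])
```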

L1 L2 Regularization - Optimizer

Optimizer: L1 L2 Regularization L1 and L2 losses are different types of loss function. L1 (Lasso): Σ|Y - f(x)| L2 (Ridge): Σ(Y - f(x))^2 L1 and L2 regularization: Y_predict = Σ(w_i * x_i + b_i), MSE = Σ(Y - Y_predict)^2, L1: loss = MSE + λ * Σ|w_i|, L2: loss = MSE + λ * Σ(w_i)^2 What does it mean to penalize the weights? It means adding another term over the parameters to the loss function, so
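A rough sketch of how the two penalties could be added to an MSE loss in PyTorch; the model, data, and lambda value are placeholders chosen only for illustration.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)            # toy model
criterion = nn.MSELoss()
lam = 0.01                         # regularization strength (lambda)

x = torch.randn(16, 4)
y = torch.randn(16, 1)

mse = criterion(model(x), y)

# L1 penalty: lambda * sum of absolute weights
l1_penalty = lam * sum(p.abs().sum() for p in model.parameters())

# L2 penalty: lambda * sum of squared weights
l2_penalty = lam * sum((p ** 2).sum() for p in model.parameters())

loss_l1 = mse + l1_penalty         # Lasso-style regularized loss
loss_l2 = mse + l2_penalty         # Ridge-style regularized loss
```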

How to Label Voice with Praat for Machine Learning

Praat

How to Label Voice with Praat for Machine Learning 1. Install 1.1 Download Praat 1. Open the Praat: doing Phonetics by Computer website; 2. Choose your operating system in the download area in the upper left corner of the website; 3. Then click praat6150_mac.dmg or praat6150_win64.zip to download the file; For example, my OS is macOS, so in my case I should download praat6150_mac.dmg and install it. Option: You can also download the file from GitHub; refer to Praat on GitHub 1.

GitHub API Basic Authentication Example

GitHub API Basic Authentication Example 1. Generate a personal access token Open the page Generate Personal access tokens and click Generate new token to get a token; 2. Use the access token to request the GitHub REST API 2.1 Install requests with pip: pip install requests 2.2 Substitute GITHUB_API_USER_NAME with your user name and GITHUB_API_PERSONAL_TOKEN with the token you got in step one, then run the following code; from requests import Request, Session from requests.exceptions import ConnectionError, Timeout, TooManyRedirects def getRateLimit(): url = 'https://api.
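Since the excerpt's code is cut off, here is a minimal self-contained sketch of the same idea using plain requests; GITHUB_API_USER_NAME and GITHUB_API_PERSONAL_TOKEN are placeholders you substitute, and the /rate_limit endpoint is used only as an easy authenticated call to verify the token works.

```python
import requests

GITHUB_API_USER_NAME = "your-user-name"        # placeholder
GITHUB_API_PERSONAL_TOKEN = "your-token-here"  # placeholder

def get_rate_limit():
    # Basic authentication: user name plus personal access token.
    url = "https://api.github.com/rate_limit"
    response = requests.get(
        url,
        auth=(GITHUB_API_USER_NAME, GITHUB_API_PERSONAL_TOKEN),
        headers={"Accept": "application/vnd.github+json"},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(get_rate_limit()["rate"])
```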

Anaconda simple usage

Anaconda simple usage 1.1 Download and install - macOS Download file: click to download Install after downloading. Run this command in the terminal to see your Anaconda version: $ conda -V conda 4.10.1 Use conda info to see the conda configuration: (base) $ conda info 2. Anaconda Usage 2.1 List all environments (base) $ conda info -e # conda environments: # base /Users/Rhys/opt/anaconda3 2.2 Create an environment (base) $ conda create -n py36 python=3.6 2.3 Activate an environment (base) $ conda activate py36 (py36) $ The active environment changes after activating;

Simple AI expert Enhanced Loop

Simple AI expert Enhanced Loop Habit: Daily plan, weekly plan, monthly plan, 10 minutes of reading, daily self-examination Loop 1: Assumption -> design an experiment -> do -> feedback -> conclusion Loop 2: Choose a subject -> weekly share with my classmates -> feedback and update -> make another share;

The form of our body in the future

The form of our body in the future 1. The human body consumes energy in the form of carbohydrates. 2. The human body consumes energy in the form of carbohydrates and electricity; parts of our body are machines. 3. In the distant future, the human body consumes energy in the form of nuclear power, and we can calculate just as if we were a supercomputer.

The Simple Implementation of BatchNorm2D

The Simple Implementation of BatchNorm2D The first is that instead of whitening the features in layer inputs and outputs jointly, we will normalize each scalar feature independently, by making it have a mean of zero and a variance of 1. For a layer with d-dimensional input x = (x(1) . . . x(d)), we will normalize each dimension 1. MyBatchNorm2D import numpy as np; class MyBatchNorm2D: def __init__(self): pass def forward(self, x): x = np.array(x); mean = np.
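The excerpt's code is truncated, so here is a minimal NumPy sketch of the forward pass under the usual assumptions (NCHW input, per-channel statistics, learnable gamma/beta initialized to 1 and 0); it is an illustration of the idea, not the post's full implementation.

```python
import numpy as np

class MyBatchNorm2D:
    def __init__(self, num_channels, eps=1e-5):
        self.eps = eps
        # Learnable scale and shift, one per channel.
        self.gamma = np.ones((1, num_channels, 1, 1))
        self.beta = np.zeros((1, num_channels, 1, 1))

    def forward(self, x):
        # x has shape (N, C, H, W); normalize each channel independently
        # over the batch and spatial dimensions.
        x = np.asarray(x, dtype=np.float64)
        mean = x.mean(axis=(0, 2, 3), keepdims=True)
        var = x.var(axis=(0, 2, 3), keepdims=True)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

bn = MyBatchNorm2D(num_channels=3)
out = bn.forward(np.random.randn(4, 3, 8, 8))
print(out.mean(axis=(0, 2, 3)))  # close to zero for each channel
```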

model(x) vs model.forward(x)

model(x) vs model.forward(x) The __call__ magic method in nn.Module invokes the forward() method and also takes care of the registered hooks and module state, so we should use model(x) rather than calling model.forward(x) directly. REFERENCE: 1. Why there are different output between model.forward(input) and model(input) 2. Calling forward function without .forward() 3. torch.nn.module codes
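A small sketch of the difference with a toy model: a forward hook registered on the module fires when you call model(x), but not when you call model.forward(x) directly, because the latter bypasses __call__.

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)

def log_hook(module, inputs, output):
    print("forward hook fired")

model.register_forward_hook(log_hook)

x = torch.randn(2, 3)
model(x)           # prints "forward hook fired"
model.forward(x)   # hook is skipped because __call__ is bypassed
```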

How to extract knowledge?

How to extract knowledge? I used to hold the opinion that knowledge extraction can only be done by summarizing, but that is not a good way. In contrast, putting the knowledge back into a concrete scenario works better for the reader's understanding. I used to think common truths are the most valuable kind of knowledge, but perhaps we need knowledge that comes with personal experience, because it contains the problem the author faced and the way they thought about it, while summarized knowledge is only the result of the thinking.

How to face investment risk?

How to face investment risk? Investment risk is the expected loss of our investment. It is sometimes difficult to calculate the probability of loss, but we can assume the loss has already happened and then try to find out its cause. Before investing, we can try to answer the following two questions: 1. What could cause the price of the assets to fall 50%? Liquidity risk Credit risk

DNN RNN CNN codes

Simple DNN RNN CNN example codes 1. DNN - Deep neural network import numpy as np; class myDNN: # 3 * 5 * 2 def __init__(self, input, hidden, output): # hidden random weight # Note: hidden_weight can be the shape of (input, hidden); correspondingly, `self.hidden_out` should equal `np.dot(input_data, self.hidden_weight)` to match the hidden_weight shape. self.hidden_weight = np.random.rand(hidden, input); self.hidden_bias = np.random.rand(hidden); # output random weight self.output_weight = np.random.rand(output, hidden); self.output_bias = np.random.rand(output); # def forward(self, input_data): self.
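Since the DNN excerpt cuts off inside forward, here is a minimal NumPy sketch of the same 3 * 5 * 2 network under the excerpt's weight-shape convention ((hidden, input) and (output, hidden)); the sigmoid activation is my own assumption for illustration, not necessarily the post's choice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MyDNN:
    # 3 * 5 * 2 fully connected network
    def __init__(self, n_input, n_hidden, n_output):
        self.hidden_weight = np.random.rand(n_hidden, n_input)
        self.hidden_bias = np.random.rand(n_hidden)
        self.output_weight = np.random.rand(n_output, n_hidden)
        self.output_bias = np.random.rand(n_output)

    def forward(self, input_data):
        # (hidden, input) @ (input,) -> (hidden,)
        hidden_out = sigmoid(self.hidden_weight @ input_data + self.hidden_bias)
        # (output, hidden) @ (hidden,) -> (output,)
        return sigmoid(self.output_weight @ hidden_out + self.output_bias)

dnn = MyDNN(3, 5, 2)
print(dnn.forward(np.array([0.1, 0.2, 0.3])))
```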

Use OpenCV stitching_detailed To Stitch Segmented Images

Use OpenCV stitching_detailed To Stitch Segmented Images Environment: Python version 3.7, opencv-python version 4.5.1.48 1. Download the stitching_detailed file and save it with the file name stitching_detailed.py; 2. Run the command; $python.exe stitching_detailed.py image_1.png image_2.png image_3.png image_4.png image_5.png image_6.png image_7.png image_8.png image_9.png origin.png --features=brisk --matcher=affine Note that images 1~9 are the segmented images and origin.png is the full picture. REFERENCE: stitching_detailed The stitching_detailed.py source code follows: """ Stitching sample (advanced) =========================== Show how to use Stitcher API from python.
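For simpler cases, OpenCV's high-level Stitcher API can sometimes do the same job without the full stitching_detailed.py script; this is a rough sketch of that alternative, not the post's method, and the file names are placeholders.

```python
import cv2

# Placeholder file names; replace with your own image tiles.
paths = ["image_1.png", "image_2.png", "image_3.png"]
images = [cv2.imread(p) for p in paths]

# SCANS mode uses an affine model, similar in spirit to --matcher=affine above.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("stitched.png", pano)
else:
    print("Stitching failed with status", status)
```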