

Showing posts from January 3, 2021

How to conduct Hypothesis Testing step by step - simple and elegant (part 3)

Step-by-step procedure for conducting a Hypothesis Test.

Prerequisites:
Part 1: What is Hypothesis Testing and When is it Used?
Part 2: How to Decide the Null and Alternate Hypothesis?

Image source: https://www.isixsigma.com/

Before diving into the steps, let's first understand the important terminology:

Null Hypothesis: the well-established default assumption; it treats everything as equal and similar (no effect, no difference).

Alternate Hypothesis: a new claim made against the Null Hypothesis. If we have enough evidence against the Null Hypothesis, we reject it.

P-value: the probability of observing results at least as extreme as ours, assuming the Null Hypothesis is true. (It is not the probability that the Null Hypothesis is true.)

Significance level: the probability of rejecting the Null Hypothesis when it is actually true. It is the critical threshold at which we decide whether or not to reject the Null Hypothesis. Generally the significance level is 0.05, which means accepting a 5 percent risk of concluding that a difference exists when there is no real difference.
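To make these terms concrete, here is a minimal sketch of a hypothesis test in Python using scipy. The sample values and the hypothesized mean of 50 are made-up numbers for illustration only, not data from the post:

```python
# Minimal one-sample t-test sketch (illustrative data, assumed scenario).
from scipy import stats

sample = [51.2, 49.8, 50.5, 52.1, 48.9, 50.7, 51.5, 49.4]
alpha = 0.05  # significance level: a 5% risk of rejecting a true H0

# H0: population mean == 50   (the default assumption)
# H1: population mean != 50   (the new claim against H0)
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)
print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")

# Compare the p-value against the significance level to decide.
if p_value < alpha:
    print("Reject the Null Hypothesis")
else:
    print("Fail to reject the Null Hypothesis")
```

Note that we either reject or fail to reject the Null Hypothesis; we never "accept" it.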

Simple Understanding of RESNET and its Architecture (part 4)

ResNet Architecture

Prerequisites: VGGNet or VGG16: VGG16 Architecture

Developed by Microsoft Research in 2015.

Problem observed: plain (non-residual) neural networks have a problem: as the number of layers increases, performance degrades. Even after many epochs, both the train error and the test error of a 56-layer plain network are worse than those of a 20-layer network.

Source: https://arxiv.org/pdf/1512.03385.pdf

In order to solve the above problem, the Microsoft team came up with an idea known as the residual unit, or identity unit, which has a skip connection.

Source: https://arxiv.org/pdf/1512.03385.pdf

The figure above shows the basic unit of the ResNet architecture, known as either a residual unit or an identity unit. The key point is that if some layers are unimportant, the skip connection lets us bypass them, so they have no impact on the network. If we observe the figure above, we have two layers; if both layers are useless, then during regularization their weights shrink toward zero, and the skip connection simply passes the input through unchanged.
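As a concrete picture of the residual unit, here is a minimal sketch in PyTorch. The two 3x3 convolutions with batch normalization and the channel count of 64 are illustrative assumptions in the spirit of the paper's basic block, not an exact reproduction of it:

```python
# Minimal residual (identity) unit sketch in PyTorch (assumed sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualUnit(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Two weight layers computing F(x); padding=1 preserves the
        # spatial size so the skip connection can be added directly.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Skip connection: if the conv weights shrink to ~0 during
        # regularization, the unit's output is approximately x itself.
        return F.relu(out + x)

x = torch.randn(1, 64, 56, 56)
print(ResidualUnit(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```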

Simple Understanding of INCEPTION_V3 and its Architecture (part 3)

INCEPTION_V3

Prerequisites: VGGNet or VGG16: VGG16 Architecture

Judging by its name, everybody thinks it is a complicated story, just like the movie Inception. But trust me, I will prove that wrong by explaining it in the most detailed way.

Until now, in any single layer of a neural network we applied only one operation, such as convolution or max-pooling, and with a fixed kernel size for the whole layer. But now the idea is: why can't we apply all of these operations in a single layer at the same time? That is where Inception_v3 comes in.

Let's zoom into a single layer of Inception_v3. (Source: a screenshot from Andrew Ng's class.)

If you observe the figure above, convolution operations with kernel sizes 1x1, 3x3, and 5x5, along with a max-pool operation, are all applied at once. But here comes a problem: COMPUTATION. A single such layer alone produces an enormous number of multiplications. For example, let's do a simple mathematical calculation. Note: to understand this you need to know how convolution operations are computed.
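To see both the parallel-branch idea and the computation problem concretely, here is a minimal sketch of an Inception-style module in PyTorch. The branch channel counts (64, 128, 32) and the 28x28x192 input size are illustrative assumptions in the style of Andrew Ng's example, not the exact Inception_v3 configuration:

```python
# Minimal Inception-style module sketch in PyTorch (assumed sizes).
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        # Padding keeps every branch at the same spatial size so the
        # outputs can be concatenated along the channel axis.
        self.branch1x1 = nn.Conv2d(in_ch, 64, kernel_size=1)
        self.branch3x3 = nn.Conv2d(in_ch, 128, kernel_size=3, padding=1)
        self.branch5x5 = nn.Conv2d(in_ch, 32, kernel_size=5, padding=2)
        self.branch_pool = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        return torch.cat([
            self.branch1x1(x),
            self.branch3x3(x),
            self.branch5x5(x),
            self.branch_pool(x),  # pooling keeps all in_ch channels
        ], dim=1)

# The cost problem in one line of arithmetic: the 5x5 branch alone,
# mapping a 28x28x192 input to 28x28x32, needs roughly
# 28 * 28 * 32 * 5 * 5 * 192 ≈ 120 million multiplications,
# which is why Inception adds 1x1 "bottleneck" convolutions before
# the larger kernels to reduce the input channels first.
x = torch.randn(1, 192, 28, 28)
out = InceptionModule(192)(x)
print(out.shape)  # torch.Size([1, 416, 28, 28]) = 64+128+32+192 channels
```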