
Simple Understanding of ALEXNET and its Architecture (part 1)

AlexNet is a state-of-the-art convolutional neural network: it outperformed all competing algorithms in accuracy in the ImageNet competition in 2012.

The ImageNet competition is a challenge in which a dataset of millions of images across 1000 categories is provided for training image-recognition models.

 

ALEXNET ARCHITECTURE:

 

Figure: AlexNet architecture. Source: https://i0.wp.com/ramok.tech/wp-content/uploads/2017/12/2017-12-31_01h31_40.jpg

 

The whole architecture consists only of convolution and max-pooling operations, followed by fully connected layers (yellow color) at the end for classification.

For a simple and fast understanding of the architecture, you only need to remember one formula for the output size of a convolution or pooling layer:

[(n - k + 2p) / s] + 1

where the square brackets denote rounding down (floor).

n -> input size (the image, or the feature map at deeper layers)

k -> kernel size

p -> padding

s -> stride
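
As a minimal sketch, this formula can be written as a small Python function (the function name is my own, for illustration):

    def conv_output_size(n, k, p=0, s=1):
        """Spatial output size of a convolution or max-pooling layer.

        n: input size, k: kernel size, p: padding, s: stride.
        The integer division matches the floor implied by the brackets.
        """
        return (n - k + 2 * p) // s + 1

    # e.g. a 32x32 input with a 5x5 kernel, no padding, stride 1 -> 28x28
    print(conv_output_size(32, 5))  # 28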

·  Now if we observe the architecture, a 224x224x3 image is passed as input to the first layer.

Let's simply understand what is happening at the first layer, so that you can easily analyze the other layers:

·  The first layer performs a convolution operation with kernel size (11x11) and stride (s=4).

Note 1: Remember that 96 such kernels are used for the convolution operation, which is why it is represented as 11x11x96.

Note 2: If padding is 'same', a padding operation is applied (the input is padded so the output keeps the same spatial size at stride 1); otherwise there is no padding (p = 0).

Now, let's apply our formula:

[(n-k+2p)/s]+1

Substitute n = 227, k = 11, s = 4, p = 0. (Note: the figure and the original paper say 224x224, but the arithmetic only works out with a 227x227 input; this is a well-known inconsistency in the AlexNet paper.)

[(227 - 11) / 4] + 1 = [216 / 4] + 1 = 54 + 1 = 55

So the output we get is 55x55, and since 96 kernels are applied, the resulting output has size 55x55x96.
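
As a quick sanity check, here is a minimal PyTorch sketch of this first layer (my own illustration, not the original implementation); printing the output shape confirms 55x55 with 96 channels:

    import torch
    import torch.nn as nn

    # First AlexNet convolution: 96 kernels of size 11x11, stride 4, no padding
    conv1 = nn.Conv2d(in_channels=3, out_channels=96, kernel_size=11, stride=4)

    x = torch.randn(1, 3, 227, 227)   # one RGB image of size 227x227
    print(conv1(x).shape)             # torch.Size([1, 96, 55, 55])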

The same math applies to all the layers, both convolution and max pooling; please try it on your own for the remaining layers.

 

→  In our figure, at the end of the blue region the result is flattened and fed into the fully connected (hidden) layers, as sketched below.

→  Up to the blue region the network is extracting image features; after that it is doing the image classification.
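
Here is a hedged PyTorch sketch of that flatten-and-classify stage (my own illustration; 6x6x256 is the feature-map size after AlexNet's final pooling layer, and 1000 is the number of ImageNet classes):

    import torch.nn as nn

    # Flatten the final 6x6x256 feature map and classify into 1000 classes
    classifier = nn.Sequential(
        nn.Flatten(),
        nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(p=0.5),
        nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(p=0.5),
        nn.Linear(4096, 1000),
    )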

 

Reasons AlexNet outperformed all earlier algorithms in 2012:

1.  AlexNet used advanced concepts such as data augmentation, dropout layers, the ReLU activation unit, and Local Response Normalization (normalizing the channels corresponding to a pixel instead of normalizing the whole tensor); a code sketch follows this list.

2.  The whole architecture was built and trained on GPUs.
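
As a rough sketch of what these tricks look like in code (my own PyTorch illustration; the hyperparameter values follow the AlexNet paper, the variable names are mine):

    import torch.nn as nn
    from torchvision import transforms

    # Data augmentation: random crops and horizontal flips of the training images
    augment = transforms.Compose([
        transforms.RandomResizedCrop(227),
        transforms.RandomHorizontalFlip(),
    ])

    relu = nn.ReLU()             # non-saturating activation
    dropout = nn.Dropout(p=0.5)  # applied to the fully connected layers

    # Local Response Normalization: each pixel is normalized across 5
    # neighboring channels rather than over the whole tensor
    lrn = nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0)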

 

Disadvantages:

1.  Different layers use different kernel sizes (11x11, 5x5, 3x3) with different paddings and strides.

2.  As a result, the architecture is difficult to remember.


The following blogs might be useful for you:

VGG16:                     VGG16 ARCHITECTURE
INCEPTION_V3:     INCEPTION V3
RESNET:                   RESNET ARCHITECTURE


If you have any queries, please comment below.
For more content, please follow the blog.
