
Simple Understanding of VGG16 and its Architecture (part 2)

VGG16 Architecture:

 

Prerequisites: 

AlexNet: AlexNet Architecture

 

* VGG refers to the Visual Geometry Group, the research group that developed the VGG16 architecture.

* 16 refers to the number of weight layers (13 convolution layers + 3 fully connected layers).

VGG16, also called VGGNet, is a convolutional neural network. It is a simpler and more accurate successor to AlexNet.

AlexNet's architecture is difficult to remember because its layer settings vary. VGG16, introduced in 2014, is much easier to remember because every layer follows the same pattern.

VGG16 is very simple because:

·  It is built from convolution operations with a fixed kernel_size=(3x3), padding='same', and stride=1 for all convolution layers.

·  All maxpool operations use pool_size=(2x2) and stride=2.

     Let's look at the architecture of VGG16:

Source: https://qph.fs.quoracdn.net/main-qimg-e657c195fc2696c7d5fc0b1e3682fde6

If you observe the figure, as stated above, all convolution layers have 3x3 kernels and all maxpool layers have a 2x2 pool size.

Note 1: Maxpool layers use padding='valid', which means there is no padding, so the feature-map size shrinks.

Note 2: Convolution layers use padding='same', which means the input is padded, so the feature-map size stays the same.
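The two notes above follow directly from the output-size formula from part 1, [(n-k+2p)/s]+1. A minimal Python sketch (the function name `output_size` is my own, not from any library) shows how 'same' and 'valid' padding behave:

```python
def output_size(n, k, s, padding):
    """Spatial output size of one conv/pool layer: [(n - k + 2p) / s] + 1.

    padding='same' pads so a stride-1 layer preserves size (p = (k - 1) // 2);
    padding='valid' uses no padding (p = 0).
    """
    p = (k - 1) // 2 if padding == "same" else 0
    return (n - k + 2 * p) // s + 1

# A 3x3, stride-1 convolution with padding='same' keeps the size (Note 2):
print(output_size(224, k=3, s=1, padding="same"))   # 224
# A 2x2, stride-2 maxpool with padding='valid' halves it (Note 1):
print(output_size(224, k=2, s=2, padding="valid"))  # 112
```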

Now let’s see the architecture in detail

Source: https://neurohive.io/wp-content/uploads/2018/11/vgg16-1-e1542731207177.png

In the above figure, observe that the convolution operations do not change the tensor size, while each maxpool operation does, exactly as described in Note 1 and Note 2.
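We can trace the feature-map size through the whole network with a short sketch. The block layout below is the standard VGG16 configuration (2, 2, 3, 3, 3 convolutions per block); since 'same' convolutions preserve the size, only the maxpool at the end of each block changes it:

```python
# (number of 3x3 'same' convolutions, output channels) per VGG16 block;
# each block ends with one 2x2, stride-2 maxpool.
BLOCKS = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]

size = 224  # input is a 224x224 RGB image
sizes = []
for n_convs, channels in BLOCKS:
    # the 'same' convolutions leave the spatial size unchanged (Note 2),
    # then the maxpool halves it (Note 1)
    size = size // 2
    sizes.append((size, channels))

print(sizes)  # [(112, 64), (56, 128), (28, 256), (14, 512), (7, 512)]
```

The final 7x7x512 feature map is then flattened and fed into the fully connected layers.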


There are two common variants of this CNN, VGG16 and VGG19. They differ in the number of layers, but their accuracies are almost the same.


Advantages:

It achieves better accuracy than AlexNet.


Disadvantages:

VGGNet has more layers than AlexNet and therefore many more parameters to train (about 138 million for VGG16, versus roughly 60 million for AlexNet), which makes training slower and the model larger.
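To make the parameter count concrete, here is a rough tally using the standard VGG16 layer configuration (the helper names `conv_params` and `dense_params` are my own, for illustration):

```python
# Conv layer params = k*k*in_ch*out_ch weights + out_ch biases;
# dense layer params = in_units*out_units weights + out_units biases.
def conv_params(k, in_ch, out_ch):
    return k * k * in_ch * out_ch + out_ch

def dense_params(in_units, out_units):
    return in_units * out_units + out_units

# (in_channels, out_channels) for the 13 conv layers of VGG16
convs = [(3, 64), (64, 64),
         (64, 128), (128, 128),
         (128, 256), (256, 256), (256, 256),
         (256, 512), (512, 512), (512, 512),
         (512, 512), (512, 512), (512, 512)]

total = sum(conv_params(3, i, o) for i, o in convs)
# the final 7x7x512 feature map feeds three fully connected layers
total += dense_params(7 * 7 * 512, 4096)
total += dense_params(4096, 4096)
total += dense_params(4096, 1000)  # 1000 ImageNet classes

print(f"{total:,}")  # 138,357,544 trainable parameters
```

Note that the three fully connected layers alone account for the large majority of these parameters.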


The following blogs might be useful for you.

INCEPTION_V3:     INCEPTION V3

RESNET:                   RESNET ARCHITECTURE



If you have any queries, please comment below...

For more content, please follow the blog...

