## What is Deep Learning?

Deep Learning is a branch of Artificial Intelligence and Machine Learning.

It is basically about training a computer with lots and lots of data so that it can perform tasks that humans are capable of, such as facial recognition, understanding natural speech, recognizing bad handwriting, and many other cognitive tasks.

The basis for Deep Learning is Neural Networks (previously referred to as Artificial Neural Networks).

## What is a Neural Network?

A Neural Network is a mathematical model of the human brain.

This mathematical model does not completely represent the human brain; it just mimics the way the human brain works in solving some real-world problems, like recognizing a familiar face or understanding handwritten digits.

Before learning about Neural Networks (Artificial Neural Networks), we shall quickly look at the structure and functionality of a real neural network, which is nothing but the neurons, or nerve cells, in the human brain.

An average human brain consists of about 100 billion neurons, connected together in a highly complex manner, with trillions of interconnections between them.

This is a simplified structure of a neuron. Let me explain the functionality of a neuron in 3 simple steps.

1) The dendrites receive inputs from various sources like the skin, muscles, and other neurons.

2) The cell body sums up the input signals, and if the sum is more than a certain threshold, it triggers a signal through the axon to the synaptic terminals.

3) The synaptic terminals pass on these signals to other neurons or muscles via synapses (a synapse is the junction between two neurons, where the exchange of chemical/electrical signals takes place).

The signals near the synaptic terminals are either excitatory (meaning they add to the overall signal strength) or inhibitory (meaning they subtract from the overall signal strength).

## Simple Artificial Neuron Model: The McCulloch-Pitts Model

The McCulloch-Pitts neuron model is a simple model that mimics the structure and functionality of a neuron/nerve cell.

The neuron (blue circle) in this case is just a summation function. It sums up all the inputs multiplied by their respective weights. The output from this neuron, $$V_{k}$$, is given by:

$$V_{k}=x_{0}W_{0}+x_{1}W_{1}+\dots+x_{n}W_{n}=\sum\limits_{j=0}^{n}x_{j}W_{j}$$

Here the first term is called the bias:

$$b_{0}=x_{0}W_{0}$$

$$x_{1},x_{2},\dots,x_{n}$$ are the inputs

and

$$W_{1},W_{2},\dots,W_{n}$$ are the weights.

The output from the summation function, $$V_{k}$$, is then passed on to a threshold function, which decides the final output $$y$$.

The output $$y$$ is 1 if the input $$V_{k}$$ is greater than or equal to the threshold value $$T$$; otherwise it is 0.

If $$V_{k} \geq T$$, then $$y_{k}=1$$;

else, $$y_{k}=0$$.
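The two steps above (weighted sum, then binary threshold) can be sketched in a few lines of Python. The function name, example inputs, and the threshold value of 1.0 are illustrative assumptions, not from the original model description:

```python
# A minimal sketch of a McCulloch-Pitts neuron.
# Inputs, weights, and threshold here are made-up examples.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Weighted sum of inputs, followed by a binary threshold."""
    v_k = sum(x * w for x, w in zip(inputs, weights))  # V_k = sum of x_j * W_j
    return 1 if v_k >= threshold else 0                # y = 1 if V_k >= T, else 0

# With two inputs, equal weights of 1.0, and threshold 1.0,
# this neuron behaves like a logical OR gate.
print(mcculloch_pitts_neuron([1, 0], [1.0, 1.0], 1.0))  # -> 1
print(mcculloch_pitts_neuron([0, 0], [1.0, 1.0], 1.0))  # -> 0
```

Note that an inhibitory input can be modeled simply by giving it a negative weight, which subtracts from the sum just like the inhibitory signals described earlier.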

The McCulloch-Pitts model belongs to a class called binary threshold neurons.

There are many other types of neurons, like linear neurons, sigmoid neurons, etc., which we shall discuss later.

### So a neural network is basically just a bunch of neurons like these, connected in a rather complicated way, to achieve tasks that only humans were previously capable of doing.

Here is an example of a 3-layer neural network. The input layer is usually not counted in the total number of layers, since it only holds the inputs (no neurons). There can be more than one hidden layer; in this case there are 2 hidden layers and an output layer.
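To make the layer structure concrete, here is a toy sketch of such a 3-layer network built from binary threshold neurons, using NumPy. The layer sizes, random weights, and threshold of 0 are all made-up assumptions for illustration; a real network would learn its weights from data:

```python
# A toy forward pass through a 3-layer network of threshold neurons:
# input layer (no neurons) -> hidden layer 1 -> hidden layer 2 -> output layer.
# All sizes and weights here are arbitrary, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, T=0.0):
    # Each neuron computes a weighted sum of x, then thresholds it at T.
    return (W @ x >= T).astype(float)

x = np.array([1.0, 0.0, 1.0])     # 3 inputs (the input layer holds no neurons)
W1 = rng.normal(size=(4, 3))      # hidden layer 1: 4 neurons, 3 inputs each
W2 = rng.normal(size=(4, 4))      # hidden layer 2: 4 neurons
W3 = rng.normal(size=(1, 4))      # output layer: 1 neuron

h1 = layer(x, W1)
h2 = layer(h1, W2)
y = layer(h2, W3)
print(y)  # a single 0/1 output
```

Each call to `layer` applies one full layer of neurons at once: the matrix-vector product `W @ x` computes every neuron's weighted sum in that layer simultaneously.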