Given a training dataset of m examples {(x^(1), y^(1)), …, (x^(m), y^(m))}, we want the predicted output ŷ^(i) to be as close as possible to the true label y^(i).
Concept | Description
---|---
m | Number of examples in the dataset
x^(i) | The i-th example in the dataset
ŷ | Predicted output
Loss Function 𝓛(ŷ, y) | A function that computes the error for a single training example
Cost Function 𝙹(w, b) | The average of the loss function over the entire training set
Convex Function | A function with a single local minimum, which is also the global minimum
Non-Convex Function | A function with many different local minima
Gradient Descent | An iterative optimization method used to converge to the global minimum of the cost function
In other words: the Cost Function measures how well our parameters w and b are doing on the training set, so the best w and b are the values that minimize 𝙹(w, b).
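As a concrete sketch (not from the notes themselves), the loss/cost distinction can be written in a few lines of NumPy. The logistic-regression model, the sigmoid prediction, and the cross-entropy loss are assumptions here, since this section does not fix a specific model:

```python
import numpy as np

def sigmoid(z):
    """Squash a real value into (0, 1)."""
    return 1 / (1 + np.exp(-z))

def loss(y_hat, y):
    """Loss 𝓛(ŷ, y) for a single example (assumed: cross-entropy)."""
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def cost(w, b, X, Y):
    """Cost 𝙹(w, b): the average of the loss over all m examples."""
    m = X.shape[1]                # number of examples in the dataset
    Y_hat = sigmoid(w.T @ X + b)  # predicted outputs ŷ for every example
    return np.sum(loss(Y_hat, Y)) / m
```

Here X stacks the examples x^(i) as columns (one column per example), Y holds the true labels, and w, b are the parameters we are trying to choose well.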
General Formula (repeat until convergence):

w := w − α ∂𝙹(w, b)/∂w

b := b − α ∂𝙹(w, b)/∂b
where α (alpha) is the Learning Rate.
It is a positive scalar that determines the step size of each gradient descent iteration as the model weights are updated in response to the estimated error; it therefore controls how quickly or slowly a neural network model learns a problem.
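A minimal sketch of the loop, continuing the logistic-regression assumption from the cost example above (the gradient expressions dw and db are the standard ones for cross-entropy loss, not formulas given in this section):

```python
def gradient_descent(w, b, X, Y, alpha=0.01, iterations=1000):
    """Repeatedly step w and b against the gradient of 𝙹(w, b)."""
    m = X.shape[1]
    for _ in range(iterations):
        Y_hat = sigmoid(w.T @ X + b)   # forward pass: predictions
        dw = (X @ (Y_hat - Y).T) / m   # ∂𝙹/∂w (assumed cross-entropy gradient)
        db = np.sum(Y_hat - Y) / m     # ∂𝙹/∂b
        w = w - alpha * dw             # step size controlled by alpha
        b = b - alpha * db
    return w, b
```

If alpha is too small, convergence is slow; if it is too large, each step can overshoot the minimum and the cost may oscillate or diverge.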