Very excited to see a deep learning course being developed! Here is some feedback that I hope helps make the course better.
Explain why an activation function is needed.
e.g. Multiplying the input by the weights and adding the bias transforms the input in a purely linear way, a straight line just like linear regression from machine learning. To capture complex patterns in our data, we need to add non-linearity to the formula, and that is what an activation function provides. There are several activation functions to choose from, each with its own pros and cons. (List some of them so learners know there's more to learn about how to choose one appropriately.)
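To make this point concrete, a small sketch (with made-up weights and a hypothetical two-feature input) could show that stacking linear layers without an activation collapses into a single linear map, while inserting a non-linearity such as ReLU breaks that equivalence:

```python
import numpy as np

# Hypothetical toy example: one input with two features,
# and two small layers with made-up weights and biases.
x = np.array([[1.0, -2.0]])
W1, b1 = np.array([[0.5, -1.0], [0.3, 0.8]]), np.array([0.3, -0.2])
W2, b2 = np.array([[1.2], [-0.7]]), np.array([0.05])

# Two linear layers with no activation...
linear_out = (x @ W1 + b1) @ W2 + b2
# ...are equivalent to ONE linear layer with combined parameters:
W_combined = W1 @ W2
b_combined = b1 @ W2 + b2
assert np.allclose(linear_out, x @ W_combined + b_combined)

# Common activation functions, each with trade-offs:
relu = lambda z: np.maximum(0, z)          # cheap, but "dies" for z < 0
sigmoid = lambda z: 1 / (1 + np.exp(-z))   # squashes to (0, 1); can saturate
tanh = lambda z: np.tanh(z)                # zero-centred; also saturates

# With ReLU between the layers, the composition is no longer linear,
# so the network can represent more complex patterns.
nonlinear_out = relu(x @ W1 + b1) @ W2 + b2
```

Showing learners the `assert` that the two linear layers collapse into one makes the motivation tangible before listing specific activations.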
Add the concepts of labels, the softmax function, and backpropagation.
e.g. For a supervised learning model, the output layer produces a score for each input, which can indicate, for instance, the probability of a classification result. The labels are provided at the output layer, and the softmax function can convert the output scores into probabilities, which are then fed into the loss function to evaluate model performance. Similar to how a machine learns, which we discussed in the previous lesson, a neural network must update all of its weights and biases according to the loss value. This is done through backpropagation: starting from the loss at the output layer, it uses the chain rule to compute the derivative (the gradient) of the loss with respect to the weights and biases of each layer, traversing in reverse from the output layer back to the input layer. The gradients indicate how each parameter should change to reduce the loss, and the model applies those updates before the next iteration begins.