- Background
- What is Overfitting?
- Lasso (L1) and Ridge (L2) Regularisation
- Early Stopping
- Dropout
- Other Methods
- Summary
So far in this neural networks 101 series we have discussed two ways to improve the performance of neural networks: hyperparameter tuning and faster gradient descent optimisers. You can check those posts below:
There is one other set of techniques that aids performance: regularisation. It helps prevent the model from overfitting to the training dataset, leading to more accurate and consistent predictions.
In this article, we will cover a wide range of methods to regularise your neural network and how you can do it in PyTorch!
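Before diving in, here is a minimal, framework-free sketch of the core idea behind penalty-based regularisation: a penalty on large weights is added to the data loss, so the optimiser is nudged towards simpler models. The `weights`, `data_loss`, and `lam` values are hypothetical, chosen purely for illustration.

```python
def l2_penalty(weights, lam):
    # Ridge (L2) penalty: the sum of squared weights,
    # scaled by the regularisation strength lam.
    return lam * sum(w * w for w in weights)

# Hypothetical weights and data loss for a tiny model.
weights = [0.5, -1.2, 2.0]
data_loss = 0.8

# The regularised loss is the data loss plus the penalty,
# so larger weights make the total loss larger.
total_loss = data_loss + l2_penalty(weights, lam=0.01)
```

Deep learning frameworks bake this directly into the training loop; we will see the PyTorch equivalents in the sections that follow.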
If you are enjoying this article, make sure to subscribe to my YouTube Channel!
Click on the link for video tutorials that teach you core data science concepts in a digestible manner!
Let’s quickly recap what we mean by overfitting in machine learning and statistics.