
Commit 531ebcc

update
1 parent b3d5803 commit 531ebcc

1 file changed: +3 -3 lines


machine-learning/03-Training and AutoML.ipynb

Lines changed: 3 additions & 3 deletions
@@ -47,9 +47,9 @@
 "The process of finding the best configuration for your trainer is known as hyper-parameter optimization (HPO). Like the process of choosing your trainer, it involves a lot of trial and error. The built-in Automated ML (AutoML) capabilities in ML.NET simplify the HPO process.\n",
 "\n",
 "### Overfitting and Underfitting\n",
-"Overfitting and underfitting are the two most common problems you encounter when training a model. Underfitting means the selected trainer is not capable enough to fit training dataset and usually result in a high loss during training and low score/metric on test dataset. To resolve this you need to either select a more powerful model or perform more feature engineering. Overfitting is the opposite, which happens when model learns the training data too well. This usually results in low loss metric during training but high loss on test dataset.\n".
-
-A good analogy for these concepts is studying for an exam. Let's say you knew the questions and answers ahead of time. After studying, you take the test and get a perfect score. Great news! However, when you're given the exam again with the questions rearranged and with slightly different wording you get a lower score. That suggests you memorized the answers and didn't actually learn the concepts you were being tested on. This is an example of overfitting. Underfitting is the opposite where the study materials you were given don't accurately represent what you're evaluated on for the exam. As a result, you resort to guessing the answers since you don't have enough knowledge to answer correctly.
+"Overfitting and underfitting are the two most common problems you encounter when training a model. Underfitting means the selected trainer is not capable enough to fit the training dataset; it usually results in a high loss during training and a low score/metric on the test dataset. To resolve this, you need to either select a more powerful trainer or perform more feature engineering. Overfitting is the opposite: it happens when the model learns the training data too well. This usually results in a low loss during training but a high loss on the test dataset.\n",
+"\n",
+"A good analogy for these concepts is studying for an exam. Let's say you knew the questions and answers ahead of time. After studying, you take the test and get a perfect score. Great news! However, when you're given the exam again with the questions rearranged and with slightly different wording, you get a lower score. That suggests you memorized the answers and didn't actually learn the concepts you were being tested on. This is an example of overfitting. Underfitting is the opposite, where the study materials you were given don't accurately represent what you're evaluated on in the exam. As a result, you resort to guessing the answers since you don't have enough knowledge to answer correctly.\n",
 "\n",
 "In the next section, we will go through two examples. The first example trains a regression model on a linear dataset using both linear and more advanced non-linear trainers to highlight the importance of selecting the right trainer. The second example trains a regression model on a non-linear dataset using `LightGbm` with different hyper-parameters to show the importance of hyper-parameter optimization."
 ]
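
The cell above only changes markdown, but since it discusses trainer selection, overfitting, and hyper-parameter optimization, here is a minimal ML.NET sketch of those two ideas: training `LightGbm` with hand-picked hyper-parameters (comparing train vs. test metrics to spot overfitting), then letting AutoML search for a configuration. The `DataPoint` class, column names, hyper-parameter values, and the 60-second time budget are illustrative assumptions, not values taken from the notebook.

```csharp
using System;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.AutoML;
using Microsoft.ML.Trainers.LightGbm;

var mlContext = new MLContext(seed: 0);

// Synthetic non-linear data (y = x^2 plus noise) so the sketch is self-contained.
var rng = new Random(0);
var points = Enumerable.Range(0, 500)
    .Select(i =>
    {
        var x = (float)(i / 50.0);
        return new DataPoint { X = x, Label = x * x + (float)rng.NextDouble() };
    })
    .ToArray();
var data = mlContext.Data.LoadFromEnumerable(points);
var split = mlContext.Data.TrainTestSplit(data, testFraction: 0.2);

// 1) LightGbm with hand-picked hyper-parameters (values are illustrative).
var pipeline = mlContext.Transforms.Concatenate("Features", nameof(DataPoint.X))
    .Append(mlContext.Regression.Trainers.LightGbm(new LightGbmRegressionTrainer.Options
    {
        NumberOfLeaves = 8,              // capacity: too low underfits, too high overfits
        NumberOfIterations = 100,        // boosting rounds
        MinimumExampleCountPerLeaf = 10, // regularization against overfitting
        LearningRate = 0.2
    }));
var model = pipeline.Fit(split.TrainSet);

// A large gap between train and test metrics is a sign of overfitting.
var trainMetrics = mlContext.Regression.Evaluate(model.Transform(split.TrainSet));
var testMetrics = mlContext.Regression.Evaluate(model.Transform(split.TestSet));
Console.WriteLine($"Train R^2: {trainMetrics.RSquared:F3}  Test R^2: {testMetrics.RSquared:F3}");

// 2) Let AutoML run HPO for 60 seconds and keep the best run it finds.
var experiment = mlContext.Auto().CreateRegressionExperiment(maxExperimentTimeInSeconds: 60);
var result = experiment.Execute(split.TrainSet, labelColumnName: "Label");
Console.WriteLine($"Best trainer: {result.BestRun.TrainerName}  Validation R^2: {result.BestRun.ValidationMetrics.RSquared:F3}");

public class DataPoint
{
    public float X { get; set; }
    public float Label { get; set; }
}
```

A train R^2 noticeably higher than the test R^2 would suggest the manual configuration is overfitting, while the AutoML run simply reports whichever trainer and settings scored best on its own validation split.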
