Commit 76c01da

Update machine-learning/03-Training and AutoML.ipynb
Co-authored-by: Luis Quintanilla <46974588+luisquintanilla@users.noreply.github.com>
1 parent d501fed · commit 76c01da

File tree

1 file changed: +1 -1 lines changed

machine-learning/03-Training and AutoML.ipynb

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@
 "\n",
 "From code aspect, \"train a model\" usually means call `model.fit(X, y)` in most of python machine learning packages, where `X` is feature array and `y` is label, or `model.Fit(X)` in ML.Net, where `X` is an `IDataView` which includes both feature and label.\n",
 "\n",
-"So what happen after we call `Fit(X)`? To answer that question we firstly need to know what's a good model mean. Generally, a good model means it's predicted result is very close to the actual label. For classification, a good model means it can classify data points correctly, and for regression, a good model means its predicted value is very close to actual value. So we just need to find a way to quantify how closeness, or how difference predicted and actual label is, then we can answer what `Fit(X)` does.\n",
+"In ML.NET, you kick off a training job by calling the `Fit` method. So what happen when you call `Fit`? To answer that question you first need to define what a good model means. Generally, a good model means the values it predicts are very close to the actual value. The value to predict is commonly known as the **target** or **label**. For classification, a good model means it can classify data points correctly. For regression, a good model means its predicted numerical value is very close to actual numerical value. So we just need to find a way to quantify how close or how different the predicted and actual values are.\n",
 "\n",
 "In machinelearning, such difference between predicted and actual label usually called `loss`, which measures distance between those values. There're difference losses for difference tasks, like softmax loss for classifiaction, rmse loss for regression. But in the gist they are all quantify metric to measure the distance between predicted and actual label. In most of cases, a lower `loss` means a better model.\n",
 "\n",
