Beyond the choice of trainer, the hyper-parameters within a single trainer also have a large impact on final training performance, especially for tree-based trainers. This is because these trainers' capacity to fit a given dataset depends largely on their hyper-parameters. For example, a larger `numberOfLeaves` in `LightGbm` produces a larger model and usually enables it to fit a more complex dataset, but on a small dataset it can backfire and cause overfitting. Conversely, if the dataset is complex but `numberOfLeaves` is set too small, `LightGbm` may lack the capacity to fit that dataset and underfit.
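As a minimal sketch of this trade-off, the snippet below constructs two `LightGbm` trainers that differ only in `numberOfLeaves`. The column names, seed, and the idea of comparing a "high-capacity" and "low-capacity" configuration are illustrative assumptions, not part of the original text:

```csharp
using Microsoft.ML;

var mlContext = new MLContext(seed: 0);

// High capacity: more leaves per tree lets the booster carve the feature
// space more finely. Good for complex datasets; on a small dataset the
// extra capacity tends to memorize noise (overfitting).
var complexTrainer = mlContext.BinaryClassification.Trainers.LightGbm(
    labelColumnName: "Label",       // assumed column name
    featureColumnName: "Features",  // assumed column name
    numberOfLeaves: 128);

// Low capacity: few leaves per tree keeps the model small and is safer on
// small datasets, but may be unable to capture a complex target (underfitting).
var simpleTrainer = mlContext.BinaryClassification.Trainers.LightGbm(
    labelColumnName: "Label",
    featureColumnName: "Features",
    numberOfLeaves: 8);
```

In practice, rather than hand-picking a single value, you would train both configurations on a train/validation split and compare validation metrics, or let a hyper-parameter search explore the range for you.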