Definition

A key issue in designing supervised learning algorithms is finding a trade-off between bias and variance.

Roughly speaking, the algorithm should be flexible enough to fit the training data well, keeping bias low and preventing underfitting, without fitting the training data so closely that it fails to generalize to unseen data, keeping variance low and avoiding overfitting.
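A minimal sketch of this trade-off, assuming an illustrative regression task: polynomial models of increasing degree are fitted to noisy samples of a sine curve, and training versus held-out error shows underfitting at low degree and overfitting at high degree. The dataset, degrees, and noise level are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a sine curve: a small training set and a held-out test set.
x_train = np.sort(rng.uniform(0, 3, 20))
y_train = np.sin(x_train) + rng.normal(0, 0.2, 20)
x_test = np.sort(rng.uniform(0, 3, 100))
y_test = np.sin(x_test) + rng.normal(0, 0.2, 100)

def mse(y, y_hat):
    """Mean squared error between targets and predictions."""
    return float(np.mean((y - y_hat) ** 2))

# Low degree: rigid model, high bias (underfits).
# High degree: flexible model, high variance (overfits the training noise).
for degree in (1, 4, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = mse(y_train, np.polyval(coeffs, x_train))
    test_err = mse(y_test, np.polyval(coeffs, x_test))
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

Typically the degree-1 fit has high error on both sets (underfitting), while the degree-15 fit drives training error down yet does worse on the test set relative to its training performance (overfitting); an intermediate degree balances the two.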

It is now widely recognized that each learning algorithm has its own selective superiority: it performs best on some tasks, but no algorithm is best for all of them.


References