Ensemble Methods

An ensemble is a machine learning model that combines the predictions of two or more models. The models that contribute to the ensemble, referred to as ensemble members, may be of the same type or different types, and may or may not be trained on the same training data. The predictions made by the ensemble members may be combined using simple statistics, such as the mode or mean, or by more sophisticated methods that learn how much to trust each member and under what conditions.
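Combining by mean (for regression) and mode (for classification) can be sketched in a few lines of NumPy. The prediction arrays below are made-up values for illustration:

```python
import numpy as np

# Hypothetical predictions from three ensemble members on four samples.
# Regression: combine by averaging the predicted values (the mean).
reg_preds = np.array([
    [2.0, 3.1, 5.0, 0.9],   # member 1
    [2.2, 2.9, 4.8, 1.1],   # member 2
    [1.8, 3.0, 5.2, 1.0],   # member 3
])
mean_combined = reg_preds.mean(axis=0)

# Classification: combine by majority vote (the mode) per sample.
clf_preds = np.array([
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
])
mode_combined = np.array([np.bincount(col).argmax() for col in clf_preds.T])

print(mean_combined)  # -> [2. 3. 5. 1.]
print(mode_combined)  # -> [0 1 1 0]
```

More sophisticated schemes replace these fixed rules with learned weights, as in stacking.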

The study of ensemble methods accelerated in the 1990s, the decade in which the papers introducing the most popular and widely used methods, such as bagging, boosting, and stacking, were published.
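All three of those methods are available in scikit-learn; the sketch below, using a synthetic dataset and illustrative hyperparameter choices, evaluates one of each with cross-validation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    AdaBoostClassifier,
    BaggingClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem stands in for real data.
X, y = make_classification(n_samples=500, random_state=1)

models = {
    # Bagging: one model type, trained on bootstrap samples of the data.
    "bagging": BaggingClassifier(
        DecisionTreeClassifier(), n_estimators=50, random_state=1
    ),
    # Boosting: members trained in sequence, each focusing on the
    # examples its predecessors got wrong.
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=1),
    # Stacking: a meta-model learns how much to trust each member.
    "stacking": StackingClassifier(
        estimators=[
            ("tree", DecisionTreeClassifier()),
            ("lr", LogisticRegression(max_iter=1000)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}

scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in models.items()
}
for name, score in scores.items():
    print(f"{name}: {score:.3f}")
```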

Why should we consider using an ensemble?

There are two main, and related, reasons to use an ensemble over a single model:

  1. Performance: An ensemble can make better predictions and achieve better performance than any single contributing model.
  2. Robustness: An ensemble reduces the spread or dispersion of the predictions and model performance.

Ensembles are used to achieve better predictive performance on a modelling problem than any single model. This can be understood as the ensemble reducing the variance component of the prediction error, at the cost of a small increase in bias (i.e. in the context of the bias-variance trade-off).
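The robustness claim can be checked empirically. A minimal sketch, assuming scikit-learn and a synthetic dataset: score a single decision tree and a bagged ensemble of trees on ten different train/test splits, then compare the mean and standard deviation of the two sets of scores. A lower standard deviation for the ensemble indicates less spread in performance:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=1)

# Evaluate both models across ten different train/test splits
# to measure the spread (dispersion) of their performance.
single_scores, ensemble_scores = [], []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed
    )
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    bag = BaggingClassifier(
        DecisionTreeClassifier(), n_estimators=25, random_state=0
    ).fit(X_tr, y_tr)
    single_scores.append(tree.score(X_te, y_te))
    ensemble_scores.append(bag.score(X_te, y_te))

print("single tree : mean=%.3f std=%.3f"
      % (np.mean(single_scores), np.std(single_scores)))
print("bagged trees: mean=%.3f std=%.3f"
      % (np.mean(ensemble_scores), np.std(ensemble_scores)))
```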




