BigDataFr recommends: Random forests and big data
Abstract
Big Data is one of the major challenges facing statistical science and has numerous consequences from both algorithmic and theoretical viewpoints. Big Data always involves massive amounts of data, but it also often includes data streams and data heterogeneity.
Recently, some statistical methods, such as linear regression models, clustering methods and bootstrapping schemes, have been adapted to process Big Data. Random forests, introduced by Breiman in 2001, are based on decision trees combined with aggregation and bootstrap ideas; they are a powerful nonparametric statistical method that handles regression problems as well as two-class and multi-class classification problems within a single, versatile framework.
This paper reviews available proposals on random forests in parallel environments as well as on online random forests. We then formulate various remarks and sketch alternative directions for random forests in the Big Data context. [...]
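To make the "decision trees combined with aggregation and bootstrap" idea concrete, here is a minimal sketch of bagged decision trees in Python. It is only an illustration of the general principle, not the authors' implementation or Breiman's exact algorithm (in particular, it omits the random feature subsampling at each split that distinguishes random forests from plain bagging); the dataset, the number of trees and the use of scikit-learn's DecisionTreeClassifier are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Illustrative data (assumption: any tabular classification dataset works here).
X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

n_trees = 100  # assumed ensemble size for the sketch
trees = []
for _ in range(n_trees):
    # Bootstrap: draw n samples with replacement from the training set.
    idx = rng.integers(0, len(X), size=len(X))
    tree = DecisionTreeClassifier()  # note: no per-split feature subsampling here
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# Aggregation: majority vote over the individual trees' predictions.
all_preds = np.array([t.predict(X) for t in trees])          # shape (n_trees, n_samples)
votes = np.apply_along_axis(np.bincount, 0, all_preds,
                            minlength=len(np.unique(y)))      # class counts per sample
ensemble_pred = votes.argmax(axis=0)

print("training accuracy of the bagged ensemble:", (ensemble_pred == y).mean())
```

For regression, the same scheme applies with regression trees and averaging of the predicted values instead of a majority vote.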
Read paper
By Robin Genuer 1,2, Jean-Michel Poggi 3, Christine Tuleau-Malot 4, Nathalie Villa-Vialaneix 5
Source: hal.archives-ouvertes.fr
1 SISTM – Statistics In System biology and Translational Medicine
Epidémiologie et Biostatistique, INRIA Bordeaux – Sud-Ouest
2 ISPED – Institut de Santé Publique, d’Epidémiologie et de Développement
3 LM-Orsay – Laboratoire de Mathématiques d’Orsay
4 JAD – Laboratoire Jean Alexandre Dieudonné
5 MIAT INRA – Unité de Mathématiques et Informatique Appliquées de Toulouse