BigDataFr recommends: Ideas on interpreting machine learning
You’ve probably heard by now that machine learning algorithms can use big data to predict whether a donor will give to a charity, whether an infant in a NICU will develop sepsis, whether a customer will respond to an ad, and on and on. Machine learning can even drive cars and predict elections. … Err, wait. Can it? I believe it can, but these recent high-profile hiccups should leave everyone who works with data (big or not) and machine learning algorithms asking themselves some very hard questions: do I understand my data? Do I understand the model and answers my machine learning algorithm is giving me? And do I trust these answers? Unfortunately, the complexity that bestows the extraordinary predictive abilities on machine learning algorithms also makes the answers the algorithms produce hard to understand, and maybe even hard to trust. […]
About the authors
Patrick Hall is a senior data scientist and product engineer at H2O.ai. His product work at H2O.ai focuses on two important aspects of applied machine learning: model interpretability and model deployment.
Wen Phan is a senior solutions architect at H2O.ai. Wen works with customers and organizations to architect systems, smarter applications, and data products to make better decisions, achieve positive outcomes, and transform the way they do business.
Sri is co-founder and CEO of H2O.ai (@h2oai), the makers of H2O. H2O democratizes big data science and makes Hadoop do math for better predictions.