The techniques of big data have been criticized recently, by Tim Harford among others. It’s easy to believe the potential here has been exaggerated, and the core point about the importance of theory in empirical work is undeniable. But I think there is some overreach in the criticisms, and in particular one of the failure examples Tim gives shows the opposite of what he claims. He points to a flu forecast by Google that failed:
“Four years after the original Nature paper was published, Nature News had sad tidings to convey: the latest flu outbreak had claimed an unexpected victim: Google Flu Trends. After reliably providing a swift and accurate account of flu outbreaks for several winters, the theory-free, data-rich model had lost its nose for where flu was going. Google’s model pointed to a severe outbreak but when the slow-and-steady data from the CDC arrived, they showed that Google’s estimates of the spread of flu-like illnesses were overstated by almost a factor of two.”
What happened here in fact illustrates one of the advantages of big data: the ability to reject models quickly thanks to frequently updating data. Google provided a forecast, and the forecast and the model underlying it were unequivocally debunked. What a novelty for the social sciences! Of course, big data isn’t alone in producing constantly updated forecasts that can be tested against reality, but the focus on forecasting and on constantly updating data is one of the advantages it has over many other areas of the social sciences. Yes, Google made a model and it was falsified. But keep in mind John Ioannidis’s famous claim that most published research findings are false. Maybe it is most, or maybe it’s only half or a quarter. But with big data, if a large percentage of your findings are false, you find out sooner rather than later.
Indeed, it is the forecast-driven nature of machine learning that makes it so appealing as a branch of empiricism. Maybe this is something you can’t appreciate if you haven’t actually spent time with a big dataset with lots of variables and seen how easy it is to pick the wrong model, or if you haven’t looked at the statistical guts of disagreeing research papers. But the insistence that the right model is the one that makes the best out-of-sample predictions, and the ability to be constantly running out-of-sample predictions, is tied to reality in a way most research doesn’t have to be.
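To make that concrete, here is a minimal sketch in Python of what a constantly re-fit, forecast-then-check loop looks like. The data is simulated, and the one-year refitting window and simple linear model are my own assumptions for illustration, not anything from the actual Google Flu Trends model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy weekly series: a real signal plus noise, standing in for flu activity.
n_weeks = 200
x = rng.normal(size=(n_weeks, 3))  # predictors (e.g., search volumes)
y = x @ np.array([1.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=n_weeks)

window = 52  # refit on the trailing year, then forecast one week ahead
errors = []
for t in range(window, n_weeks):
    model = LinearRegression().fit(x[t - window:t], y[t - window:t])
    pred = model.predict(x[t:t + 1])[0]  # a genuine out-of-sample forecast
    errors.append(pred - y[t])

print(f"out-of-sample RMSE: {np.sqrt(np.mean(np.square(errors))):.3f}")
# If the model were badly wrong, this error stream would say so within
# weeks, not years -- the feedback loop the Flu Trends episode demonstrates.
```

The point is the error stream: every new week is a fresh test of the model, so a bad model announces itself quickly.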
Another thing I think people miss is that there is too much focus on what a really long dataset does to, for example, the probability of spurious correlations. What makes machine learning interesting isn’t just dealing with long datasets (many observations), but really wide datasets (many variables). Consider the data Netflix has on its customers. It isn’t remarkable for how many people are in the dataset, but for how much it knows about each of them. Yes, what they say about spurious correlations is true, but if you run high-frequency out-of-sample predictions, a spurious relationship will be falsified relatively quickly no matter what your p-value.
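Here’s a toy illustration of why width matters, with numbers made up purely for the example (100 observations, 5,000 pure-noise predictors). Search enough columns and something will correlate in-sample; the out-of-sample check is what kills it:

```python
import numpy as np

rng = np.random.default_rng(1)

# A "wide" dataset: few observations, many candidate predictors, all noise.
n_obs, n_vars = 100, 5000
X = rng.normal(size=(n_obs, n_vars))
y = rng.normal(size=n_obs)

train, test = slice(0, 50), slice(50, 100)

# In-sample, the best of 5,000 noise columns looks impressively correlated...
corrs = np.array([np.corrcoef(X[train, j], y[train])[0, 1]
                  for j in range(n_vars)])
best = int(np.abs(corrs).argmax())
print(f"best in-sample correlation: {corrs[best]:+.2f}")

# ...but the out-of-sample check falsifies it almost immediately.
oos = np.corrcoef(X[test, best], y[test])[0, 1]
print(f"same variable out of sample: {oos:+.2f}")  # near zero
```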
In 2001, the statistician Leo Breiman wrote a paper titled “Statistical Modeling: The Two Cultures,” contrasting the data modeling culture and the algorithmic modeling culture that compete among statisticians. The data modeling culture uses goodness-of-fit tests and residual examination to validate models, while the algorithmic modeling culture focuses on out-of-sample predictive accuracy. He estimated that 98% of all statisticians fell into the former camp and only 2% into the latter. I’m not sure what the numbers are today, and surely many fall into both camps, but the percentage of statisticians who are algorithmic folks, and the share of important problems they are working on, has risen sharply from 2%.
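For readers who want the contrast in code rather than prose, here is a minimal sketch of the two validation styles side by side, on simulated data of my own choosing (the dataset and model are illustrative, not from Breiman’s paper):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Simulated data: 10 predictors, only the first one actually matters.
X = rng.normal(size=(200, 10))
y = X[:, 0] + rng.normal(size=200)

# Data modeling culture: fit once, then inspect in-sample diagnostics.
model = LinearRegression().fit(X, y)
residuals = y - model.predict(X)
print(f"in-sample R^2: {model.score(X, y):.2f}, "
      f"residual std: {residuals.std():.2f}")

# Algorithmic modeling culture: score the model on data it never saw.
cv_scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print(f"5-fold cross-validated R^2: {cv_scores.mean():.2f}")
```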
It’s true that misleading p-values are a consequence of having a lot of data, but this is a data modeling culture problem, not an algorithmic modeling culture problem: a model validated on out-of-sample predictions doesn’t get to hide behind a significant p-value.
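A quick simulated example of what that means in practice, with made-up numbers: given enough observations, an effect that explains a vanishingly small share of the variance still clears any conventional significance threshold, while the out-of-sample check reports it as useless:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# A million observations of an effect that explains ~0.01% of the variance.
n = 1_000_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)

# Data modeling verdict: overwhelmingly "significant".
r, p = stats.pearsonr(x, y)
print(f"correlation: {r:+.4f}, p-value: {p:.1e}")

# Algorithmic verdict: fit on one half, predict the other half.
half = n // 2
slope = np.cov(x[:half], y[:half])[0, 1] / x[:half].var()
pred = slope * x[half:]
ss_res = np.sum((y[half:] - pred) ** 2)
ss_tot = np.sum((y[half:] - y[half:].mean()) ** 2)
print(f"out-of-sample R^2: {1 - ss_res / ss_tot:.5f}")  # essentially zero
```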
Now don’t get me wrong: I am, for the most part, a p-value-checking, residual-examining, data-modeling-culture economist. But machine learning and big data are going to get more important, not less, and I think social scientists who don’t learn to at least think like the other culture are going to be left behind.
By Adam Ozimek
Source: Forbes