I’m speaking, of course, about the crazy upset of the University of Virginia Cavaliers by the University of Maryland, Baltimore County (UMBC) Retrievers, 74–54, in the NCAA Division I Men’s Basketball Tournament: the first time a number-16 seed has ever beaten a number-1 seed. To make the unprecedented even more improbable, Virginia was the overall number-1 seed among all 68 teams in the tournament. And this UMBC victory didn’t rely on the typical March Madness highlight scene: the garden-variety, three-quarter-court miracle shot at the buzzer by the 5’6” floppy-haired underdog. No, this was an absolute trouncing of the best team in the field by a comparatively ragtag squad that, during the regular season, had been whipped 83–39 by the likes of the Albany Great Danes. UMBC was in some respects lucky just to be on the same court with the Cavaliers, having earned an automatic bid by winning the America East Conference tournament in a surprising upset over the favored Vermont Catamounts. Without exaggeration, UMBC’s smackdown of Virginia ranks as one of the biggest upsets in all of basketball history.
Few, but some, had predicted the shocking result. ESPN hosts an annual tournament challenge in which millions of people submit their own predictions for the tournament, with the best bracket winning $10,000. ESPN reported that only 3.4% of entrants had penciled in the historic UMBC upset. That the number was even as high as 3.4%, however, almost certainly reflects an outlier segment that made the pick out of UMBC fandom, or silliness, or perhaps a tendency toward foolhardy risk-taking, not out of any true basketball acumen. Nobody could have logically concluded that UMBC was the likelier winner that day, because by any measure, the Retrievers simply were not more likely to win.
However poor that 3.4% success rate for humans was, the number of brackets generated by the best available computer models that predicted the upset was zero. This discrepancy between humans and machines raises questions about the inherent unpredictability of sports versus the power of computing, especially given how much has been made of big-data analytics improving our ability to predict events such as the NCAA tournament.
Computer models for the NCAA basketball tournament abound, and some of them (such as this one) claim impressive results. (You can find references to other examples here and here.) These computer models can use strikingly different methods to predict winners and losers. Some use a composite of various team ranking systems to make predictions. Others delve deeper to find correlations between rarely published stats and matchup outcomes. Still others assume a certain number of upsets and then sprinkle in against-the-odds victories to make the overall bracket resemble typical tournament results.
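To make the first of those approaches concrete, here is a minimal Python sketch of a composite-ranking predictor. Everything in it is illustrative: the ratings, the three ranking systems, and the weights are made-up numbers, not data from any real model.

```python
# A minimal sketch (hypothetical data and weights) of the "composite of
# ranking systems" approach: average several normalized team ratings,
# then pick the higher composite score in each matchup.

# Hypothetical ratings from three imaginary ranking systems (higher = better).
RATINGS = {
    "Virginia": {"system_a": 0.98, "system_b": 0.96, "system_c": 0.97},
    "UMBC":     {"system_a": 0.41, "system_b": 0.44, "system_c": 0.39},
    "Kansas":   {"system_a": 0.93, "system_b": 0.95, "system_c": 0.91},
    "Penn":     {"system_a": 0.52, "system_b": 0.49, "system_c": 0.55},
}

WEIGHTS = {"system_a": 0.5, "system_b": 0.3, "system_c": 0.2}  # assumed weights

def composite(team: str) -> float:
    """Weighted average of a team's ratings across all ranking systems."""
    return sum(RATINGS[team][s] * w for s, w in WEIGHTS.items())

def predict(team1: str, team2: str) -> str:
    """Pick whichever team has the higher composite rating."""
    return team1 if composite(team1) >= composite(team2) else team2

print(predict("Virginia", "UMBC"))  # -> Virginia (the model never picks the upset)
print(predict("Kansas", "Penn"))    # -> Kansas
```

Note what this toy model can never do: because it always takes the higher composite score, a 16-over-1 upset is structurally impossible for it to predict.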
Let’s be clear: in general, these computer models tend to fare better than humans at predicting NCAA basketball games. A computer model will in fact often give you an admirable bracket for your office pool, or even a good bracket for betting purposes. That’s because a computer model can take in far more data than people can, surfacing important but underappreciated signals that shed predictive light on a game’s eventual outcome. On top of that, it can process all that information far more rigorously than even experts can.
But the general rule for computer-model-based simulations of any such tournament is that they will likely give you a better-than-average bracket, not a truly great one. In a much larger pool, such as the yearly ESPN tournament challenge, a computer model is not going to get you anywhere near the top.
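A rough Monte Carlo sketch can show why. The numbers below are pure assumptions for illustration (32 games, every favorite winning 75% of the time, and each rival entrant flipping four random picks to underdogs), but the pattern they produce is the point: as the pool grows, the chance that the safe, all-favorites bracket finishes first shrinks, because somebody’s riskier bracket gets lucky.

```python
# A rough Monte Carlo sketch of why a "best expected score" bracket rarely
# wins a large pool. All parameters here are assumptions for illustration.
import random

N_GAMES, P_FAVORITE, FLIPS = 32, 0.75, 4

def simulate_pool(n_entrants: int, trials: int = 500) -> float:
    """Estimate how often the all-favorites bracket wins the pool outright."""
    wins = 0
    for _ in range(trials):
        # 1 = the favorite won that game, 0 = the underdog won.
        results = [int(random.random() < P_FAVORITE) for _ in range(N_GAMES)]
        model_score = sum(results)  # the model picked every favorite
        best_rival = 0
        for _ in range(n_entrants):
            picks = [1] * N_GAMES
            for g in random.sample(range(N_GAMES), FLIPS):
                picks[g] = 0  # this entrant gambles on an upset in game g
            score = sum(p == r for p, r in zip(picks, results))
            best_rival = max(best_rival, score)
        wins += model_score > best_rival
    return wins / trials

for pool_size in (10, 100, 1000):
    print(f"{pool_size:>5} rivals: all-favorites bracket wins outright "
          f"{simulate_pool(pool_size):.1%} of the time")
```

The all-favorites bracket has the highest expected score of any single entry, yet against a thousand rivals it almost never finishes alone at the top.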
Let’s take a look at this year’s SportsLine advanced computer model prediction, for example. Only one of the four NCAA tournament regions is published publicly, but in that one visible bracket alone, the computer model, before the tournament began, predicted only 7 of 12 games correctly through the first two rounds. Many brackets submitted to the ESPN tournament challenge, such as this one, did far better. Similarly, last year the SportsLine advanced computer model didn’t correctly predict the Final Four (again, judging from the limited data available), and correctly predicting the Final Four is a feat of prognostication achieved by many hundreds or thousands of people every year.
Data science doesn’t fail in the ESPN tournament challenge simply because it is an inexact science, though it is. Data science fails at this level of prediction because these tournaments always include a measure of against-the-odds randomness that is impossible to foresee with any precision (the quick arithmetic below gives a sense of scale). Well-designed computer models can excel at determining which team will usually win. But the fact that a good model, by design, will never favor the team it sees as less likely to win ironically reveals that model’s limitations. Unlikely victors have always been with us, and they’re one reason we bother to watch these games in the first place. Big-data analytics will continue to make the most sensible choice for every game. Computer models will continue to produce a sane bracket. But the tournament isn’t nicknamed March Sanity. Maddeningly unpredictable upsets will always happen, and that madness is what makes tournaments like these so compelling.
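Here is that back-of-the-envelope arithmetic, under generously assumed odds. Even if every favorite won 80% of the time (an assumed figure, for illustration only), a 32-game opening round would almost never go entirely to form:

```python
# Back-of-the-envelope arithmetic with assumed per-game odds: even strong
# favorites, compounded across 32 games, make a zero-upset round a rarity.
p_favorite = 0.80                    # assumed probability a favorite wins any game
n_games = 32                         # first-round games in a 64-team field

p_no_upsets = p_favorite ** n_games  # every favorite wins every game
expected_upsets = n_games * (1 - p_favorite)

print(f"P(no first-round upsets): {p_no_upsets:.4%}")         # ~0.08%
print(f"Expected first-round upsets: {expected_upsets:.1f}")  # ~6.4
```

Under those assumptions, a round with no upsets happens less than one time in a thousand, and a handful of upsets is the norm. Which particular underdogs supply them is exactly the part no model can tell you.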
To keep up with more emerging trends and technologies, follow Prowess on our blog, Twitter, and LinkedIn.