In this week’s episode, we take on a question that could lead to an existential crisis for this very podcast: How much of a difference do analytics actually make? We’d better hope the answer is that analytics make a big difference. The emergence of sports analytics has spawned quite an industry already, and it promises a great deal of growth still to come. We don’t want to be overly dramatic, but there are jobs on the line. Nevertheless, we have a duty on this podcast to take on the hard questions. Analytics naysayers and doubters abound. Do they know something that true believers have failed to see, either willfully or otherwise? Could Charles Barkley be right? Are analytics simply a baseless conceit created by a bunch of nerds so they could get jobs in sports and, you know, dates? Analytics expert and author Ben Alamar returns to help seek out proof that analytics do lead to improved results.
Paul Michelman: As far as sacred texts for the sports analytics movement go, you can’t do much better than Moneyball, Michael Lewis’s tale of the 2002 Oakland Athletics’ miraculous run to the playoffs. With Billy Beane at the helm, the A’s relied on sabermetric strategy at all levels of the organization, from drafting to scouting to game-day decision-making, and the result: The team overcame a major financial disadvantage to win 103 games. The ’02 season was certainly not an outlier in Oakland. Since 2000, the A’s have been to the playoffs nine times — all under Beane’s watch — and never with a payroll ranking in the top half of the league.
Ben Shields: But there’s a reason that the most famous line from that famous book has nothing to do with Oakland’s improbable success. Despite those sparkling 103 wins in 2002, the A’s fell to the Minnesota Twins in a winner-take-all American League Division Series game five. And if that rings a bell, it might be because the same season-ending scenario had befallen Oakland the year before — and the year before that too. When Beane uttered those iconic words, “My ‘stuff’ doesn’t work in the playoffs,” it wasn’t just a summary of the limits of his analytics-based approach, it was a premonition. Five more times since that memorable Moneyball team bowed out of the postseason early have the A’s lost an early-round, do-or-die playoff game, with just a solitary AL Championship Series appearance in 2006.
Paul Michelman: For almost two decades, Moneyball has been the gift that keeps on giving in the world of baseball. Its core principles now widely inform strategy throughout both leagues, but it still hasn’t brought a title to its earliest adopters, who continue to labor away in their quest for ultimate baseball glory, in the soulless, dual-use, concrete box of a half-filled stadium in the East Bay. I’m Paul Michelman.
Ben Shields: I’m Ben Shields. And this is Counterpoints, the sports analytics podcast from MIT Sloan Management Review. In this episode, we take on a topic that could well lead to an existential crisis for this very podcast. How much of a difference do analytics actually make?
Paul Michelman: So Ben, we better hope the answer to the question on the table is that analytics make a big difference, right? The emergence of sports analytics has spawned quite an industry already, and it promises a great deal of growth still to come. I don’t want to be overly dramatic, but there are jobs on the line, man. Nevertheless, we have a duty to take on the hard questions. Analytics naysayers and doubters abound. Do they know something that the true believers have failed to see, either willfully or otherwise? Could Charles Barkley be right? Are analytics simply a baseless conceit created by a bunch of nerds, so they could get a job in sports, and you know, dates?
Ben Shields: Okay, so this is a big question. So, we’re bringing back the big mind of Ben Alamar to help us wrestle it to the ground. Ben Alamar, as some of our listeners may remember, is the author of a book called Sports Analytics, so we can’t think of a better person to have on our show today to discuss this question. Ben, thanks so much for coming back.
Ben Alamar: Thanks for having me back.
Paul Michelman: So, we’re going to look at this question across three dimensions. The first is: How much of a difference do analytics make in our understanding of what is happening in the game and the language we use to talk about what’s happening in the game? The second is: How much of a difference have analytics made to individual and team performance within the game? And the third is: How much of a difference have analytics actually made to teams winning at the ultimate level?
Ben Shields: All right, Paul. I want to kind of set a little bit of a foundation here for our conversation, because you could argue that analytics is the “best” buzzword of all time. And so just to put a little bit of definition around it, I want to call upon MIT Sloan’s very own Professor Dimitris Bertsimas, who offers, I think, a very clear and helpful definition of analytics, which is “The science of using data to build models that lead to better decisions that add value to individuals, to companies, to institutions.” Now I like this definition for a few reasons. Number one, it separates data from models, right? Data and analytics are two different concepts. Data represent the numbers, and then you build and run models to understand and interpret the data. The second reason I like this definition is that it focuses on the decisions. In the end, analytics should help you make better decisions. Whizbang methodologies and cutting-edge technologies are all well and good, but do they help you make a better decision? That’s the true measure of effective analytics. And I think it also speaks to the broad appeal of analytics to individuals, companies, institutions and, of course in our case, sports. I also want to put a finer point on the definition of analytics. We could think of analytics in descriptive format, which might mean what happened in the past and why. We could think of analytics in a predictive sense, using models to forecast the future. Or we could think of analytics in their most advanced state as prescriptive, or providing guidance by evaluating possible scenarios.
Paul Michelman: So, let’s begin. Guys, what are some examples of analytics contributing in material ways to our understanding [of] what is happening?
Ben Alamar: Well, I think that the easiest one to point to right away is three-point shooting in the NBA. You can see this — the concept of an efficient shot and the value of a shot has taken over. And that is a basic concept from basketball analytics that talks about, basically, that three is more than two. And so, as long as the shooting percentages aren’t too different, that you want to take threes and not twos. And so you can see the amazing growth of the number of threes that people have taken over the last 10 years in the NBA. It’s just skyrocketed. I mean most players now take more threes than whole teams did 15 years ago. And that’s just from a better understanding of what efficient basketball is and a change in behavior that has come directly from some calculations that were done.
Ben Shields: And what’s interesting about that example, Ben, too, is that was enabled in part by innovations in data capture, correct? The SportVU cameras, for instance, providing the NBA with a whole new data set upon which the three-point shooting revolution was built. Correct?
Ben Alamar: Well, to an extent, yes. And that sort of aided and accelerated it, but it actually began before the motion-capture data existed, because with play-by-play data, we can see what your odds are of making a three-point shot, particularly a corner three versus, you know, a 16-foot, two-point shot. And you just do a basic calculation there, and you can see that the three is worth more than the two. What the motion-capture data has allowed us to do is get much finer estimates of the probability you’re going to make a shot, based on the complete context of what’s going on in the play, where the defender is and everything like that, and the type of shot you’re taking. But just the basic calculation shows that if you shoot 35% from the corner, it’s a much better shot than a two-point shot that’s maybe three feet closer, but you still shoot, you know, maybe 37% on.
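Alamar’s back-of-the-envelope comparison comes down to expected points per attempt: make probability times shot value. A minimal sketch of that arithmetic, using the illustrative percentages from the conversation rather than actual league shooting data:

```python
# Expected points per shot attempt = make probability x shot value.
# The percentages below are the illustrative figures from the
# conversation (35% corner three vs. 37% long two), not league data.

def expected_points(make_probability: float, shot_value: int) -> float:
    """Return the expected points for a single shot attempt."""
    return make_probability * shot_value

corner_three = expected_points(0.35, 3)  # 35% corner three
long_two = expected_points(0.37, 2)      # 37% long two-pointer

print(f"Corner three: {corner_three:.2f} expected points per attempt")
print(f"Long two:     {long_two:.2f} expected points per attempt")
```

Even though the two-pointer goes in slightly more often, the corner three produces roughly 1.05 expected points per attempt versus about 0.74 for the long two — the basic calculation behind the league-wide shift toward threes.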
Paul Michelman: Is this legitimately new knowledge, or just new terminology to describe things we’ve always been talking about?
Ben Alamar: Well, it’s not necessarily new knowledge. I mean if you asked anybody who thought about it prior, that, yes, three is more than two. That’s not groundbreaking. What is groundbreaking though is framing basketball and scoring in efficiency terms. So, efficiency is a terminology that just wasn’t used 15 years ago, and now it’s the standard for how we think about things. So, we don’t think about points over a total game. We think about points on a per-possession basis. You know, take an Allen Iverson or Kobe Bryant who’s trying to score 40 points per game. He doesn’t care if he takes 80 shots to do it, but now we care very much that if you’re scoring 40 points, you’d better be doing it on like 40 shots or fewer, because otherwise you’re not being a very efficient member of the basketball team.
Paul Michelman: So how much of this is: “We identify something, we want to measure it, and then we figure out how to measure it”? And how much of this is: “Ooh, look what this technology can measure, let’s measure it”?
Ben Alamar: I think that particularly at the professional level, every inch of competitive advantage you can get is worth it. So, what the technology really is — and I think when it’s used best, it is what you say, the: “How do we measure this? How do we start?” When you start with questions like, “I want to understand this team’s defense better.” OK, that’s a question we had answers to in the past, and we had ways to try and answer that. Now, we have a much deeper way of answering that kind of question with the data that we have, and so the technology enables much better analysis and deeper questions. You know, there are certainly some things that people like to do just because they think they’re fun and cool, but for the most part, I think that teams are really focused on how this is going to impact decisions; how this is actually going to drive more wins.
Ben Shields: I just want to say, that’s a really interesting point. And you know, one of my favorite panels from this past year’s Sports Analytics Conference was about creating a new stat, and you had Voros McCracken up there, of defense-independent pitching statistics fame, who says that some of the best analytical work he’s ever done was done sitting on his couch staring at the ceiling. And I think that speaks to this notion that you have to start with a clearly articulated problem. That’s where the human comes in: a human with a clearly articulated problem, to which you can then apply data and technology to help solve it.
Ben Alamar: And I would add to that, that very often humans come in with poorly articulated problems, and then you start talking about them and refine them until they become a well-articulated problem that you can then translate into a statistical problem.
Paul Michelman: So before we move out of this bucket into the second bucket, can you give me an example outside of basketball? Can you give me a second big case?
Ben Shields: I have one, and there’s no question that the basketball analytics revolution has been fun to behold, but there is another revolution happening and it’s with the sport of curling.
Paul Michelman: You’ve been waiting a long time for this…
Ben Shields: I have. And to be clear, I am not the biggest curling aficionado, but as someone who loves analytics, I’m becoming more and more interested in the sport. And there is this guy by the name of Gerry Geurts who founded the website CurlingZone. And based on his sheer love and passion for the sport of curling, he has built a database of curling results from over 175,000 contests. And what did he do with that data? Well, he started analyzing it to see what types of “trends” were different from conventional wisdom in the sport, and he says, “It was a long-held belief that being down one with the hammer in the last end was the preferred position to the opposite. But with the numbers, we were able to show that those teams were only winning about 40% of the time.” Now, Gerry has been entrepreneurial. He’s also started to work with curling teams. So, for those of you who followed the Winter Olympics in 2018, you may know that he, for instance, advised the gold medal champions in both women’s and men’s curling. The women’s gold medal winners were from Sweden, and they started working with Gerry about two months prior to the Winter Olympics, and at the time, they were ranked 22nd in the world. But as they gathered more data on the tendencies and styles of their opponents, they were able to alter their own style to the point where, when they played South Korea in the final, they basically knew all the tendencies of their opponents based on their previous playing behavior, [so] that they were able to mount a very aggressive campaign against them and of course win the gold medal. Now, the strategies that the Swedish team and the U.S. men’s team use in the future, if they do the same thing, may not be as successful, but that just means that there’s going to be a new set of questions, a new set of problems that need to be answered to inform the next iteration of curling strategies that win.
Paul Michelman: So let’s move to the second question. Let’s look at how this new knowledge we have gained — these new explanations for why things are happening — how have they demonstrated material changes in performance?
Ben Alamar: Yeah, so I think that the point here in what we’re discussing is about: If all of these computers and analyses that [we] do and brainpower we throw at this stuff — if it’s not having an impact at some level, then why are we bothering? And one example we can clearly see where this came into play was the U.S. Olympic Committee, which has a really difficult problem. They have to invest in athletes, and their goal is to win as many medals as they possibly can, because that’s what generates more money for the U.S. Olympic teams. And so they have to invest in athletes four, six, seven years before they could possibly be on a medal stand and win a medal in an Olympics. And they have to start making that resource allocation decision really early on. And so that’s a problem that is sort of built for analytics.
So, if you’re looking at sprinters, and you start looking at times, and you say, “All right, this is a 16-, 17-year-old sprinter, and this is how they’re running the 100.” And so we can watch their development over time and watch how their time is dropping, and we can predict what Olympic Games they’re going to make it to and what the winning medal time is going to be in those Olympics. And then we can see if they’re on track to be on the podium when they’re going to be in the Olympics in six years. And you can make a decision at that point. You can say: “All right, if they are, great, keep doing what you’re doing.” If they’re not on track, then they have to change their program, or they’re not going to get the resources they’ve been used to having. Because the Olympic Committee cannot afford to invest in people they don’t think have a reasonable chance of being on the podium. So that causes coaches to change behavior and causes coaches to get athletes back on a different track to maximize their potential. And you can see from the London Games how that panned out, with the U.S. track and field team significantly overperforming where people expected. A lot of that came from good decision-making very early on — on where and which athletes we were going to be supporting.
Paul Michelman: So Ben, let me jump in, because I think this brings to the fore kind of one of the common objections, right? You can tell a player to shoot more threes or to take a better swing launch angle, and that’s all well and good, but it doesn’t explicitly mean they’ll be able to do it. And if you look at two of the big dynasties of the 21st century, the Spurs and the Patriots, both teams are famous for finding undiscovered sources of talent or taking another team’s castoffs and turning them into gold. But is that because Popovich and Belichick are noticing some analytical trend that they can unleash when they bring new talent in to release some potential that others couldn’t? Or are they just visionaries at player development and assessment?
Ben Alamar: So, there’s obviously both things going on. Both teams were early in the analytics revolutions in their sports. So, the Spurs were very early on in talking about analytics, and they were one of the first teams to start increasing their three-point attempts. What’s interesting about the Spurs though, now they take the most long twos in the game. It’s as a reaction to a change in strategy, reacting to the current strategy, and trying to take advantage of the defenses overplaying the three-point line. Whereas the Patriots were, very early on in the NFL, one of the early teams to have a real research department, really take a look at the numbers and really think about it, and so they are identifying particular roles — players who do the things that they are particularly interested in, and they’re putting those players in the right places. You know, Belichick’s genius is beyond just following analytics. He’s not just blindly doing that, but he’s using that to inform those kinds of decisions all the time.
Paul Michelman: I think we’re in general agreement. My cynicism is at a pretty mild level, but can we legitimately and fully make the case that those changes in in-game performance are actually resulting in the ultimate goal which is winning at the highest level?
Ben Alamar: Yeah, so that’s a great question, but I think I would reframe it a little bit. The general concept is right. We say that analytics provides a competitive advantage — so if it does, then it should lead to more wins. Absolutely true. Where we are now in analytics, though, is at a point where everybody does analytics at some level, so we can’t just say this team is analytics and this isn’t, so they’re going to win more. So, that’s not really a fair way to look at it. Everybody has that competitive advantage, so the level that you’re competing against constantly rises. And so, it’s really the teams that are getting an extra competitive advantage out of it [that] are the ones who are ahead of the game and being innovative with how they use the analytics. However, there are some teams that we can recognize [that] don’t use analytics well or in some cases at all.
When we look at their performance, we can see that not using it, at least to the average level in their leagues, is devastating. When was the last time a Major League Baseball team that wasn’t a serious analytics team won anything of consequence? In basketball, if you’re not working with that spatial data, if you’re not drawing insights from it, you’re losing. Teams don’t make the playoffs without those kinds of things anymore. So it’s not necessarily: Does analytics create more wins? It is: Does not having analytics create more losses? And there I think that the case is really clear that it does.
Ben Shields: Right. And I would build on that and suggest that even in sports like the NFL — where you could argue that, despite teams like the Patriots, many coaches have seen analytics as a really detestable concept — NFL teams are now starting to see some success by at least looking at their play calling more analytically. So Ben, you’re probably familiar with this, but there’s been a great line of research, driven a lot by Football Outsiders, around play-action passing. And Ben Baldwin has been writing a little bit about this, and he found that from 2011 to 2017, 196 of 224 team seasons had higher yards per play on play-action drop backs than on non-play-action drop backs. Point is, when you run a play-action pass, you’re going to get more yards [than] in a non-play-action pass. And what we’re seeing, at least initially, from some of this really interesting research out of Football Outsiders is that having a great running game isn’t necessary to setting up an effective play-action game. Net net, when you look at the teams that are running some of the most play-action passes, not only [this] season but also in history, who are they? The Los Angeles Rams and the Kansas City Chiefs, two of the most prolific offenses that we’ve seen this season. So, it’s still, of course, too early to tell whether either of those teams will win the Super Bowl, but they’re certainly having good regular seasons. And I would argue that they are exploiting an advantage that they’re seeing in the numbers and doing so with very effective execution.
Paul Michelman: So, Ben and Ben, is there a risk that those of us who do believe in the power of sports analytics, and especially those who are in the profession, give even these tangible examples of analytics driving change and performance too much credit in the overall picture of success? Let me pull an example that comes right from the introduction to the show. Let’s look at those Moneyball A’s of the early 2000s. I don’t think there is much doubt among anyone who has read the book or seen the movie that the changes Billy Beane and company brought into the organization were clearly, absolutely seen in the offensive performance of the Oakland A’s. But there are three players who are rarely mentioned in the conversation about the A’s success. And so let me ask you guys a question. On the one hand, you have those players who were either identified by analytical insights or whose performance improved, and probably some combination of both. And there are three great examples: Chad Bradford, Scott Hatteberg, David Justice. Right? All featured prominently in the book and the movie. Would you rather have those three guys on your team? Or Tim Hudson, Barry Zito, and Mark Mulder — the big three young arms that powered a remarkable A’s pitching staff — who are never discussed as part of the Moneyball phenomenon. Who was actually more responsible for their success?
Ben Alamar: Well, I mean that’s an empirical question, and we can actually go back and take a look at that kind of thing, using a variety of analytical tools to see which really did have more impact. In general, star pitchers have about the same impact as star batters over the course of a season. And so the true Moneyball guys maybe don’t quite measure up to a true star pitcher, but what they do do is enable those pitchers to remain on the team, because they keep the overall salary low — because the Moneyball strategy was to buy low — and so that created enough salary space to keep those pitchers around [when] they might have, in another era, had to move off, because they weren’t going to be able to pay them.
Ben Shields: Yeah, and Paul, first of all, I apologize because I think you’re in a tough spot here. Two against one, two Bens against one, but remember that the Moneyball mentality was: Let’s understand which attributes contribute most to winning and then how the Major League Baseball market values those attributes.
Paul Michelman: How often do teams look backwards after the fact and assess? Do teams kind of do these detailed after-action reviews?
Ben Shields: I think that there’s some of that that goes on. I don’t know how deep any of it particularly is. I mean, I’m sure there’s some folks that go very deep into it. Where that runs into real trouble is [when you’re] trying to disentangle methods and things from, you know, talent and athletes and effort. And actually ascribing a value and splitting up the credit to these different things can be challenging, and particularly when a team didn’t do well and the staff is changing over, then there’s going to be no real analysis of what was done before in terms of strategy work. They’re just going to bring in their new ideas and have at it.
Paul Michelman: I think that speaks to the randomness aspect as well, because for all of the planning that analytical-driven thinking prompts us to do — and that’s a big piece of it, right, to be more planful, to be more strategic about managing performance — we can’t account for randomness. And what happens, I think (and I think I’m arguing the opposite side now) is what stands out are the times when the strategy didn’t work. Right? So a couple of easy examples. If you look at the Rockets, the Houston Rockets’ game seven performance against the Warriors last year, right? This was a team as built on analytics as we’ve discussed as any in history. They were built around making threes, and what did they do in game seven? They missed 27 straight three-point shots, and they blew the game. And there was nothing that analytical planning could have done to save that. Were they so built on one theory that they weren’t able to shift when that theory wasn’t working?
Ben Alamar: Well, I think, certainly missing the threes was a huge problem. They also were missing their second-best player of the season. So, they missed all those threes, and they still had a game of it. They still had a chance. So the strategy, if they had been at full strength, there’s a very reasonable chance that it still would have worked. But the point is true; if you’re so rigid in your strategy, that by itself is not a value of analytics to be rigid. The sport is dynamic, and it changes, and you have to be able to adapt, and you hopefully have the personnel to be able to adapt to it, but sometimes you don’t, and you can blame analytics, you can blame planning, you can [blame] whatever it is. You’re not always as adaptable as you’d like to be, but certainly there’s no strategy, particularly in basketball, that is a hard-and-fast analytical rule that you have to play in a certain way, that somebody should be following.
Ben Shields: And that’s really what this whole conversation is about, which is: Teams fundamentally, as long as sports have been played, are looking for a competitive advantage. And this is about using all sources of information to find that competitive advantage. And what’s going to be so fascinating about this field is yes, today in the NBA and the NFL, Major League Baseball, across different sports — there are styles that are popular. But, those styles will inevitably give way to the next great style. And chances are that style is at least going to be informed by some sort of analytical work. It’s all driven by this constant need to find that competitive advantage. And I can’t blame teams for searching out all different types of information to gain that competitive advantage.
Paul Michelman: All right, so let’s assess how we’ve done so far. The first part of the question, again, was about giving new language and understanding. I think that’s a slam dunk. It’s clear analytics has done that. Second was, is it driving changes in individual performance? Yes. I think it’s clear that it has done that. I don’t think analytics is the sole reason why players are able to improve their performance or why teams are able to improve their talent assessment, but it’s clearly a major contributor. And you guys punted on question three. Ben Alamar rephrased it, and you were convincing on your rephrase, but I think it’s interesting that we can’t emphatically answer the question about winning. That still leaves me with this hint of doubt about whether analytics work. I think if I have a concern in all of this, it’s that those who believe in analytics the most, I still have this fear that they believe in it too much. And when you do that, whether it’s in the context of sports or in any context, if you believe that there was one part of your strategy that has an outsize impact on the outcome, you tend to overinvest in that, and you tend to stop paying attention to the other elements that might be contributing to your success. I think there’s a small risk of that happening here.
Ben Shields: I would say that the challenge for leaders in the sports business, specifically general managers and even to a greater degree coaches, is to be able to synthesize all the information in order to make effective decisions, and that information could be very advanced analytics. It can also be human behavior. That’s one thing that is an unmistakable component of sports performance, the human element, and that’s why I think some of these jobs are really challenging. Paul, I don’t have a great answer to the question that you’re posing. The great leaders in this business are able to synthesize the data analytics and pair it and calibrate it with the human element to effectively guide a team to a winning record. That’s the challenge.
Ben Alamar: And I would just add to that, that I think the point that you can overdo something is absolutely not unique to analytics. It is a fundamental truth of any new idea, new concept, anything that you’re bringing to the part of your strategy. And so the hard thing I think for analysts sometimes to do is to step back and really recognize what the limits of the models are and be honest with themselves and their organization. There’s a lot of fighting to use analytics from an analyst perspective, and that’s good. That’s their role. But they also need to make sure that they’re not overstating their results in order to make a point or giving too much credit to their analysis when things go well, just because it happened to work and they need to get more value placed in it, because in most organizations analytics are more in danger of being undervalued than overvalued.
Ben Shields: Yeah. I think that’s an interesting point. And I would add that two of the dominant narratives amongst sports teams right now, I think, are the degree to which you use analytics and then the culture of the organization. I think it’s going to be fascinating to see how both of those narratives continue. Right? And to the degree to which they intersect as well.
Paul Michelman: And just to kind of punctuate that point, Ben, because you brought it up earlier and again here, and I think that’s really critical if the playing field of analytical sophistication is beginning to flatten. In other words, if everyone is beginning to follow some of the same theories, it’s the people behind it and how they choose to make use of the insights they’re developing that actually are going to be the differentiator going forward.
Okay. So we’re going to wrap this up. If each of you, individually and at different times, found yourself in the elevator with one Charles Barkley for a 30-second ride, what’s your pitch?
Ben Alamar: Wow. If I’m in the elevator with Charles, I would hesitate to use the word analytics at all. But I think the fundamental move is just pointing to teams like the Rockets, like the Warriors, and saying, “Look, there’s a difference here. This is a different kind of basketball, maybe not the kind of basketball you like to watch, but it’s winning basketball, and their success can’t be ignored. And analytics is part of it. It’s not the whole thing. And nobody, no analyst thinks it’s the whole thing. But it’s part of what’s happened, it’s part of what’s changed the NBA, it’s part of how these teams win games.”
Ben Shields: My pitch to Charles, if I were in an elevator with him for 30 seconds, is: “What if I told you I had a way for you to beat Michael Jordan and maybe win a championship in your career?” I’m not sure how well he would respond to that, but what the heck, might as well ask the question.
Paul Michelman: I’d tell him I think he was awesome, and I’d ask him for his autograph and a restaurant recommendation.
Ben Alamar: I think you’ve got the best plan, Paul.
Ben Shields: Yes. That will be the most persuasive of all the pitches.
Paul Michelman: Ben Alamar, thanks so much for lending us your insights.
Ben Alamar: Thanks for having me again.
Ben Shields: Thanks Ben.
Paul Michelman: This has been Counterpoints, the sports analytics podcast from MIT Sloan Management Review.
Ben Shields: You can find us on iTunes, Google Play, and wherever fine podcasts are streamed. If you enjoy Counterpoints, please take a moment to rate and review the program, and we’ll graciously accept your constructive criticism too.
Paul Michelman: Counterpoints is produced by Mary Dooe. Our theme music was composed by Matt Reed. Our coordinating producer is Mackenzie Wise. Our crack researcher is Jake Manashi, and our maven of marketing is Desiree Barry.