After laying down some of the groundwork, it's time for the good stuff. After all, it's fairly common knowledge among the educated fanbase that the sacrifice is a poor play, but it's harder to establish just how poor, and whether it's getting worse. Perhaps the only thing more frustrating than watching a decent hitter square around to bunt is seeing that hitter, rusty from years of swinging away, pop the ball straight up or right back to the pitcher. The traditional fan, in turn, bemoans a lack of fundamentals. Either way, the rally is killed.
So how do we measure the value of a bunt? There are two methods, neither of them perfect. The first is to compare the run expectancy of the two game states, using a statistic called RE24. RE24 weights each possible number of runs scored in the remainder of the inning by the probability of that outcome, producing an expected value: if you were to replay the same game situation over and over, RE24 tells you the mean runs scored per iteration.
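The arithmetic behind this is just a probability-weighted average. Here's a minimal sketch; the outcome probabilities below are invented for illustration, not taken from any real run-expectancy table:

```python
# Hypothetical distribution of runs scored in the rest of an inning,
# from a given base-out state. Real tables are built from decades of
# play-by-play data; these numbers are made up for demonstration.
outcome_probs = {
    0: 0.70,  # probability the inning ends with 0 more runs
    1: 0.16,
    2: 0.08,
    3: 0.04,
    4: 0.02,
}

def expected_runs(probs):
    """Weight each possible run total by its probability and sum."""
    return sum(runs * p for runs, p in probs.items())

print(expected_runs(outcome_probs))  # mean runs per replay of this state
```

Replay the state enough times and the average runs scored converges on this number, which is all RE24 is claiming.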
This is something that many fans do consciously or unconsciously in football, especially during a fourth-quarter drive. We watch the other team march up the field and near that 35-yard line, and calculate: "If they had to kick from here, their chances of making it are 70%. But then there's a sack! Now it's down to 55%."
So what is the average change in run expectancy after a sacrifice bunt?
They're bad. In only one season, and only for position players, has the sac bunt not damaged the team's ability to maximize runs scored (on average). But this doesn't tell the whole story; run expectancy only gives us the average runs scored. We can do better, thanks to Tom Tango, who published the probability of scoring each given number of runs based on the game state and the run environment (average runs scored per game). In 2013, teams are averaging 4.19 runs per game. Here are the calculations for each prospective number of runs scored:
You're (hopefully) not bunting to score six runs. But let's say it's the bottom of the ninth and the game is tied. You have a runner on first and no outs. Sacrificing that runner lowers the average run expectancy from 0.84 to 0.651, but your chance of scoring at least one run goes up from 17.6% to 22.9%. (On average. Obviously, you're not guaranteed that 22.9%; that's just how it plays out over time. See the game theory section.) Moving a runner from second to third at the cost of an out increases the odds of scoring a single run from 35.7% to 48.7%. That's an even bigger improvement, which helps explain why the second-to-third sacrifice is growing in popularity.
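The trade-off is easy to state in code. This is just the arithmetic above restated, with the quoted 2013 values (Tango's tables at the 4.19 runs-per-game environment) plugged in:

```python
# Runner on first, no outs -> runner on second, one out.
# Values quoted in the text, from Tango's 2013 run environment.
re_before, re_after = 0.840, 0.651    # run expectancy
p1_before, p1_after = 0.176, 0.229    # chance of scoring at least one run

delta_re = re_after - re_before   # expected runs sacrificed: about -0.19
delta_p1 = p1_after - p1_before   # one-run probability gained: about +0.05

# Runner on second, no outs -> runner on third, one out.
p1_second, p1_third = 0.357, 0.487
delta_p1_2to3 = p1_third - p1_second  # about +0.13, an even bigger jump
```

The sign flip is the whole argument: the same play that loses expected runs gains win-the-game-now probability, which is why the bottom of the ninth is the special case.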
But remember, only in the bottom of the ninth (or extra innings) does scoring one run guarantee a win. In any other inning, a manager can never be certain exactly how many runs they'll need to win. As a cautionary tale, think of Jack Brohamer of the 1977 Chicago White Sox. With his team up 6-0 in the bottom of the seventh, Brohamer laid down a sacrifice looking for an insurance run. They got it. The Rangers came back and won, 9-8.
RE24 tells us how the sacrifice performs in terms of runs scored. In order to know how it performs in terms of wins, we turn to our old friend, Win Probability Added (WPA).
WPA isn't quite as unkind to the sacrifice, especially for position players, but with rare exceptions it, too, shows the sacrifice bunt to be a net loss. (That's partly because all those dumb, low-leverage early-game bunts barely move the win probability at all.) And like RE24, there's a definite downward trend. Keep in mind, however, that we shouldn't look at the play without context: a 0 WPA isn't exactly a great thing, but it's not as terrible as it sounds. It means the team is no better or worse off than it was before; essentially, it's a pass. If the team's worst hitter is stepping to the plate and afterward the team is no worse off, they're probably pretty happy with that result.
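WPA itself is nothing more than the change in win expectancy from before a play to after it. A toy sketch, with hypothetical win-expectancy values (real ones come from historical win-expectancy tables):

```python
def wpa(win_exp_before, win_exp_after):
    """Win Probability Added: how much one play moved the team's chance of winning."""
    return win_exp_after - win_exp_before

# Hypothetical numbers for illustration: a "successful" sacrifice that
# nudges the team from a 50.0% to a 48.5% chance of winning.
bunt_wpa = wpa(0.500, 0.485)   # a small negative number
```

Sum every play's WPA over a game and you get the final result (a win adds up to +0.5 for the winning side), which is why a near-zero play reads as a pass rather than a disaster.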
It's graphs like these that can make a fan tired of the sacrifice. Basically, the average bunt is worth -0.01 to -0.02 WPA even if it's successful. But as Tango, Lichtman, and Dolphin point out in The Book, just because managers treat sacrifices as automatic doesn't mean that we should. Giving up an out for a base is a pretty terrible trade, but that's not the trade that the batter has to make. As it so happens, the sacrifice bunt can look a lot like a bunt for a base hit, and base hits are good.
In fact, there are nine possible results stemming from a sacrifice bunt attempt:
- single, all runners safe
- fielder's choice, all runners safe
- error, all runners safe
- sacrifice (runners advance, batter out)
- failed sacrifice (runners do not advance, batter out)
- strikeout (runners do not advance, batter out)
- force (lead runner out, batter safe)
- ground into double play
- ground into triple play (don't laugh, it's happened)
The first three of these outcomes I've dubbed "successful" attempts: all runners are safe, and the team is more likely to win. Add in the sacrifice, and we have "acceptable" attempts, basically what the manager was going for. Any of the final five results is a failure. Note that this categorization is based on result, not intent. We are not penalizing the batter for errors, any more than we reward him for an errant throw. All we care about is the resulting game state. Especially with errors, this can feel counterintuitive; batting average has taught us to treat errors like outs. From a team standpoint, however, they're the natural consequence of putting the ball in play and forcing the defense to do its part. Anyone who recalls an Ichiro groundout from 2001-02 will know how much pressure even the most routine of plays can put on a fielder.
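The bookkeeping described above is easy to mechanize. A sketch, with shorthand outcome labels of my own invention, keying purely off the resulting game state and never off intent:

```python
# Result-based buckets for bunt attempts. An error counts as a success
# here: we credit the batter for forcing the defense to make a play,
# not the fielder for botching it.
SUCCESSFUL = {"single", "fielders_choice_all_safe", "error_all_safe"}
ACCEPTABLE = SUCCESSFUL | {"sacrifice"}
FAILURES = {"failed_sacrifice", "strikeout", "force_lead_runner",
            "double_play", "triple_play"}

def classify(outcome):
    """Bucket a bunt attempt purely by its resulting game state."""
    if outcome in SUCCESSFUL:
        return "successful"
    if outcome in ACCEPTABLE:
        return "acceptable"
    return "failure"
```

Counting "acceptable" attempts per season is then just a matter of running every play-by-play bunt result through this function.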
A look at "acceptable" bunts should tell us whether batters are putting the ball in play the way they're supposed to, regardless of whether it actually helps the team. The numbers are both surprising and not surprising:
We see the same random peaks in the early eighties and the early aughts. But what's noticeable is that all batters, no matter what position, are getting desired results less often than they used to. In order to figure out what's happening, we need to break things down even further, into each result:
It's hard to ignore the bizarre blip that takes place between 2000 and 2002, where all results were pretty much in line with the usual rates, except that suddenly fielders stopped trying to get the lead runner. I have no idea why this would be. I've combed through the B-R Play Index data, and I just don't see it. But at the same time, I have a hard time believing what I do see. So for now, take that section with a grain of salt.
The big takeaway from the three graphs is that both hits and fielder's choices have dwindled for pitchers, while for position players the rate of bunt singles has been rising year after year. On the negative side, pitchers strike out dramatically more often than position players, but there's a good reason for this: pitchers have less incentive to swing away when they reach a two-strike count, and a foul bunt with two strikes is an automatic strikeout. Pitchers also ground into three times as many double plays as position players.
But lest we lay this all at the feet of the pitchers, note that position players are declining as well. They don't strike out more, but that doesn't mean they're not reaching two-strike counts. What they are doing is producing more failed and force-out sacrifices than they used to, from around 10% in the seventies to closer to 18% today. Figuring out what's causing this drop in productivity will require another section.
There's more to say here, including a breakdown of different positions as well as looking at the Mariners as a team, but I'm running on the fumes of my fumes. So to the bullet points:
Next article, we'll examine the different positions and how they've held up over time. Are first basemen too slow to bunt well? We'll also look at the Mariners' performance as a team, comparing their results to the American League. Eventually, we'll stop talking about averages and start talking particulars, like Kyle Seager's bunt against the shift last week. That was enjoyable.