Baseball Articles

RSS

Winning Early, Young Starters, and Defense

Tom Ruane and Tom Tippett Share Research

This month, I'm delighted to share this space with Tom Ruane, who recently joined the Diamond Mind team as a researcher, programmer and writer. Tom came to my attention earlier this year when he posted a large number of extremely well-researched and well-written items to the mailing list for the Society for American Baseball Research (SABR). He's one of the best baseball analysts I've come across in a long, long time, and I hope you enjoy reading his material as much as I do.

We're going to cover four mostly unrelated topics in this article. First, Tom will explore how much you can read into teams that get off to a quick start in April. Second, he'll delve into the question of babying the arms of young pitchers. Third, I'll present some grass/turf fielding statistics. And finally, I'll talk about illusions that can be created by range factors.

The Importance of Winning Early, by Tom Ruane

Near the beginning of the 1998 baseball season, Don Zminda of Stats Inc. wrote a column for America On-Line. It dealt with the importance of the first few games of the season and started with a table correlating a team's mark in its first 10 games with its eventual record. For example, of the 218 teams to start the season 4-6 since 1951, only 38.1% of them managed to avoid a losing record, while 63.3% of the 226 teams starting out 6-4 finished up at .500 or better. He continued:

"Interesting stuff, but John Dewan wanted me to go out on even more of a limb. He asked to see the same chart based on a club's first two or three games. This one simply knocked me over:

   Teams W-L Record in First 2 or 3 Games 1951-97

                    Cumul Finish     Ended Season
            # of    ==============    ============
     Start   Tms     Record     Pct    .500+   Pct

      0- 2   284  21279-23213  .478     119   41.9%
      1- 1   494  38916-39137  .499     251   50.8%
      2- 0   288  23648-21493  .524     189   65.6%

      0- 3   146  10936-11925  .478      56   38.4%
      1- 2   383  29470-30800  .489     182   47.5%
      2- 1   388  30937-30317  .505     213   54.9%
      3- 0   149  12500-10801  .536     108   72.5%

Would you have guessed that there'd be such a big difference between starting out 0-2 versus going 1-1 or 2-0? But there it is, and it seems to be anything but a fluke. At every number of games we looked at from two to 10, there's the same straight-line progression."

Which led Don to conclude that "early-season games ARE a lot more important than you might think. That old baseball adage - 'A win in April is worth two in September' - is not so crazy."

I thought this was very interesting, but I decided to see whether the phenomenon was unique to games at the start of the season. So here's what I did: rather than concentrate on the first three games of the season, I looked at ALL three-game stretches during the season. If a win in April really is worth more than one at other times, you'd expect the spread of winning percentages to be more pronounced in games 1-3 than in, for example, games 135-137. But it isn't.

     Teams W-L Record in Games 135-137 1951-96

                   Cumul Finish     Ended Season
          # of    ==============    ============
   Record  Tms     Record     Pct    .500+   Pct

    0- 3   143  10556-12246  .463      48   33.6%
    1- 2   344  26666-28265  .485     145   42.2%
    2- 1   354  28918-27679  .511     214   60.5%
    3- 0   143  12450-10400  .545     114   79.7%

NOTE: I've removed ties from the equation. I guess I should call them decisions not games, but you get the idea.

The median percentages for these four groups of records, taken across all windows from games 1-3 through games 160-162, are as follows:

   Record  Win Pct    .500+ Pct
    0- 3    .469        35.8%
    1- 2    .490        46.8%
    2- 1    .511        58.7%
    3- 0    .530        70.0%

So what their study shows is not that early games are more significant than later ones, but that a lot more bad teams go 0-3, at ANY time during the season, than do good teams. And so on.

By the way, the worst single game to lose if you want to have a winning season? Game 81. Only 43.4% of the teams losing their 81st decision since 1951 were able to avoid a losing season. The best game to lose? It's a tie: all 5 teams that dropped their 163rd, 164th or 165th decision of the season had great records. I wonder why.

The Care and Feeding of Young Arms, by Tom Ruane

With Kerry Wood in the headlines recently, there has been a lot of discussion in the press about how best to protect his (and other young pitchers') arms. In May, Rob Neyer wrote a "Stats Class" column on ESPN.com about the price pitchers ultimately pay when they rack up a lot of innings at a tender age. He looked at the 19 pitchers since 1969 who threw 200 or more innings in a season at a young age and compared their performance in seasons 1-3 with their performance in seasons 4-6. Here's what he found:

                   IP     K/9    ERA
    Years 1-3   11375    5.97   3.31
    Years 4-6    8565    6.24   3.43
     Pct. +/-   -24.7    +4.5   -3.6

He noted the large dropoff in innings pitched, gave a few examples (including Mark Fidrych) and concluded with:

"You've heard a lot about Kerry Wood and Nolan Ryan. . . . But there is one important difference between the two pitchers. Nolan Ryan didn't total 200-plus innings in a major-league season until he was 25 years old. No one ever talks about this, but it goes a long way toward explaining why Ryan was still throwing 90-plus fastballs two decades later."

Well, that got me interested in what a similar chart would look like for pitchers over the same period (1969-1991) who waited until they were 25 to pitch 200 or more innings. (I picked 1991 as the cutoff because it gave me six years of data to examine for today's pitchers.) This list includes 54 pitchers and looks like this:

                    IP    K/9    ERA
     Years 1-3   31240   5.32   3.77
     Years 4-6   22448   5.44   3.81
      Pct. +/-   -28.1   +2.3   -1.1

So the innings pitched totals for the pitchers who saved themselves until their mid-twenties fell off even more than those who were rushed into action. And given that their performance over the second period was almost identical to the first, I'm assuming that their drop in innings pitched was primarily due to arm problems. Of course, it is possible that their teams expected them to get better and dumped them when they failed to improve. You could argue that we're talking about two different classes of pitchers here and you'd be right; pitchers with the ability to break into a starting rotation in their early twenties are as a group a lot more talented than those who come around four or five years later. Still, the differences in their ability should not have affected their susceptibility to arm woes.
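The Pct. +/- rows in both tables follow from a simple percent-change calculation. One wrinkle worth making explicit: the sign for ERA appears to be flipped so that a minus sign always means a decline in performance (a higher ERA is worse, while more innings or strikeouts per nine are better). A minimal sketch:

```python
# Reproduce the "Pct. +/-" rows from the two tables above.
def pct_change(before, after, lower_is_better=False):
    """Percent change from before to after; the sign is flipped for
    stats (like ERA) where a lower number means better performance."""
    change = 100.0 * (after - before) / before
    return -change if lower_is_better else change

# Neyer's early starters: years 1-3 vs years 4-6
print(round(pct_change(11375, 8565), 1))                       # -24.7 (IP)
print(round(pct_change(5.97, 6.24), 1))                        # 4.5   (K/9)
print(round(pct_change(3.31, 3.43, lower_is_better=True), 1))  # -3.6  (ERA)

# The 54 later starters
print(round(pct_change(31240, 22448), 1))                      # -28.1 (IP)
```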

By the way, I looked at a sampling of these pitchers, and with some exceptions (like Doyle Alexander), they did not exceed 200 innings pitched in the minors either.

Of course, I'm not suggesting that it's okay for Kerry Wood to start throwing 150 pitches a game, or that it is somehow beneficial for a pitcher to top the 200-inning mark early. I guess my point is two-fold:

1) Innings pitched might not be the best measurement here. How many pitches a pitcher throws, especially when his arm is tired, is probably a better indicator than yearly innings pitched. Livan Hernandez pitched 96 innings in 1997 (not counting the post-season) and was probably overworked.

2) Pitchers' arms are fragile throughout their twenties. My guess is that the charts above wouldn't have changed much (except for the number of pitchers involved) if I had picked 23, 27 or 29 as the target age. Pitching strategies are still evolving as we learn more and more about how to protect players from arm problems. It wouldn't surprise me if, in a decade or so, most teams have six-man rotations and starting pitchers average 150-175 innings a year. Of course, then you'll have to listen to me complain about how they don't make pitchers like they used to. (Why, in my day, pitchers were men--tough guys who thought nothing of pitching seven, maybe eight innings on occasion--especially if they had their good stuff and the weather wasn't too hot or muggy.)

Grass/Turf Effects in Fielding Statistics

I'm sure by now you've seen your share of situational breakdowns in batting and pitching stats in various magazines, books and web sites. But I'll bet you haven't seen too many fielding splits.

I spend a lot of time studying fielding, probably because I'm a little guy who grew up as a good-field, light-hitting shortstop. I have no illusions about being good enough to play professional ball -- I'm definitely not -- but if there was a Designated Fielder position in baseball, I'd be first in line. It wouldn't matter to me if I never came to the plate so long as I could play shortstop every day.

One of the things I've long been curious about is the effect of grass and turf fields on fielding statistics. Do infielders get to more grounders when the infield grass has a chance to slow them down? How much does playing on turf cut down on error rates? Is there any impact on the stats for outfielders?

To find out, I compiled fielding totals by position on a grass/turf basis for the period from 1980-1997. The data originated from Retrosheet (www.retrosheet.org), Project Scoresheet (now defunct), The Baseball Workshop (now part of Total Sports), and Total Sports. Here are the totals:

                Fielding Totals, Grass/Turf, 1980-97

Pos  S  Innings     PO      A    E    DP   PB   FPct  PO/9   A/9  DP/9  PB/9
---- -  ------- ------ ------ ---- ----- ----  ----- ----- ----- ----- -----
 p   t   244805  15818  33919 2365  2246        .955  .582 1.247  .083
 p   g   419065  26917  58977 3940  4325        .956  .578 1.267  .093

 c   t   244805 164211  13642 2198  1682 2158   .988 6.037  .502  .062  .079
 c   g   419065 279432  24399 3585  3206 3796   .988 6.001  .524  .069  .082

1b   t   244805 247210  19629 1994 21838        .993 9.088  .722  .803
1b   g   419065 418629  33284 3681 39900        .992 8.991  .715  .857

2b   t   244805  58706  83712 2503 17445        .983 2.158 3.078  .641
2b   g   419065 101227 141417 4939 31429        .980 2.174 3.037  .675

3b   t   244805  20266  55217 3859  4592        .951  .745 2.030  .169
3b   g   419065  35137  92969 6994  8468        .948  .755 1.997  .182

ss   t   244805  44333  83451 4000 16365        .970 1.630 3.068  .602
ss   g   419065  76695 143548 7658 29614        .966 1.647 3.083  .636

lf   t   244805  55022   1879 1206   300        .979 2.023  .069  .011
lf   g   419065  97607   3215 2311   526        .978 2.096  .069  .011

cf   t   244805  72902   1611  972   386        .987 2.680  .059  .014
cf   g   419065 125194   2507 2000   666        .985 2.689  .054  .014

rf   t   244805  55953   2228 1182   445        .980 2.057  .082  .016
rf   g   419065  96373   3632 2330   769        .977 2.070  .078  .017

As you might expect, fielding percentages among infielders are a little higher on turf than on grass.

The rates of putouts and assists per nine defensive innings are almost identical, which probably says that (a) balls that are slowed down a little by grass are about equally offset by the truer hops you get on turf, that (b) with exactly three outs per inning on either surface, it would be hard for these rates to deviate by much anyway, and that (c) one can safely ignore the effects of grass and turf when evaluating players based on range factors or similar stats.
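The rate and percentage columns in the table are simple proportions, and it's easy to check any cell. A minimal sketch, using the pitchers-on-turf line as the example:

```python
# The table's rate columns: (events * 9) / defensive innings,
# and fielding percentage: (PO + A) / (PO + A + E).
def per_nine(events, innings):
    return round(events * 9.0 / innings, 3)

def fielding_pct(po, a, e):
    return round((po + a) / (po + a + e), 3)

# Pitchers on turf: 15818 PO, 33919 A, 2365 E in 244805 innings
print(per_nine(15818, 244805))           # 0.582, matching the PO/9 column
print(fielding_pct(15818, 33919, 2365))  # 0.955, matching the FPct column
```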

The DP rate is noticeably higher on grass for all infield positions. It's been said that players run faster on turf (based on noticeable increases in stolen base percentages on turf), so this difference is probably due to the turf runner getting to second a little more quickly and the batter getting to first a little more quickly.

Passed ball rates are about 4% higher on grass. My guess is that this is just random and doesn't mean anything.

Range Factor Illusions

For years, I've been writing about the danger of reading too much into range factors. This subject came to my attention again this summer when I received my copies of two terrific books from STATS, Inc.: the All-Time Major League Handbook and the All-Time Baseball Sourcebook. Both volumes are packed with valuable information, including some that you cannot find anywhere else. They're very expensive, but I recommend both.

The All-Time Major League Handbook includes season-by-season fielding statistics for every player, and for each position they played. I'm not aware of any other book with complete fielding information. An early edition of Total Baseball had some fielding data for players with relatively long careers, but this section was dropped in later editions. The MacMillan Baseball Encyclopedia has some fielding data for some players, but it's not broken out by position. In the electronic world, the Fan Park Electronic Baseball Encyclopedia has good fielding data.

The All-Time Major League Handbook includes each player's range factor at each position along with the league average range factors for that position. This is very important, because it gives you a baseline against which to evaluate the player. I wish all sources included league averages to help us put player numbers in context.

Unfortunately, the range factor statistic has some serious weaknesses, and if you're going to use these numbers to draw conclusions about fielding prowess, you need to keep these weaknesses in mind. STATS computes range factors using this simple formula:

  Range Factor = (Putouts + Assists) / Games

Here are some of the reasons why you need to be careful about ranking players based on range factors:

Missing positions. STATS doesn't compile range factors for pitchers, catchers and first basemen, so if you're looking for a way to evaluate fielding at these positions, you're out of luck.

Playing time. The first problem with range factors is that not all games are created equal. Defensive specialists who often enter games in the late innings or are frequently lifted for pinch hitters are still charged for a full game played despite having many fewer opportunities to make plays than the starters. As a result, some of the best fielders in the game have their range factors artificially depressed.

For modern seasons, we use defensive innings to get a more precise measure of playing time. Defensive innings are just like pitcher innings. If you are in the field when an out is recorded, you are credited with one-third of a defensive inning. Defensive specialists and other part-time players are more fairly represented this way. But the STATS All-Time Handbook doesn't use this method because they didn't start compiling defensive innings until the late 1980s.
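The contrast between the two playing-time conventions can be sketched as follows. The formulas are the standard games-based and innings-based definitions; the numbers for the part-time fielder are invented for illustration:

```python
# Games-based range factor (the STATS formula) versus the
# innings-based variant described above.
def range_factor_per_game(po, a, games):
    return (po + a) / games

def range_factor_per_nine(po, a, def_innings):
    return 9.0 * (po + a) / def_innings

# A hypothetical late-inning defensive specialist: charged with 40
# "games played" but on the field for only 120 defensive innings
# (about three innings per appearance). Values invented.
po, a = 26, 50
print(round(range_factor_per_game(po, a, 40), 2))   # 1.9 - looks poor
print(round(range_factor_per_nine(po, a, 120), 2))  # 5.7 - looks excellent
```

The same fielder's rating nearly triples once playing time is measured in innings rather than games, which is exactly the distortion described above.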

Grouping outfield positions. For most of baseball history, the official fielding records lumped all three outfield positions together. The STATS book is no different. Almost all of the CFs in the book have range factors higher than the league average. And most of the LFs and RFs are below the league average. But that's largely because (a) the STATS data groups all OF positions together and (b) more balls are hit to CF than the other two positions. So, if you're looking at a player with a range factor of 2.20 in a season where the average outfield range factor was 2.00, and you don't know which OF position he played, you don't really know whether he was better than average or not.

Strikeouts vs balls in play. If someone plays behind pitchers who strike out a lot of batters, fewer balls are put in play. In 1996, for example, the Indians fielders saw 4516 balls put in play (excluding home runs), while the Yankees defense saw only 4348, in large part because the New York pitchers struck out 106 more batters. And the Athletics defense was presented with 4720 batted balls, almost 400 more than the Yankees. These extra chances can inflate range factors very quickly.

Ground ball percentages. In 1996, the Twins infielders had a crack at only 1746 ground balls, while the Indians saw 2148 grounders. On the flip side, the Twins outfielders got to chase 1956 fly balls, more than any other defense. Range factors made the Twins outfielders look like Gold Glovers, and their infielders look weak, but it had much more to do with their pitchers than their defense.

Left/right splits. Some teams have more left-handed pitchers than others, which usually means they face more right-handed batters than others. And right-handed batters are about twice as likely to hit a ball to 3B as to 1B. In 1996, left-handed batters (including switch-hitters batting left) accounted for 43.5% of the at-bats, yet the Indians saw 50.1% lefties. That's a difference of 371 batters.

Statistical quirks. Not all putouts and assists are created equal. A second baseman, for example, gets a putout each time he makes the tag on a steal play or takes a throw from a shortstop on a force play. Neither play has anything to do with his range, and both can be artificially boosted by playing with a great throwing catcher or a superior defensive shortstop, especially when playing behind a pitching staff that puts a lot of runners on base.

Degree of difficulty. Some fielders might have benefited from an unusually high number of routine plays, such as lazy popups and soft line drives, while others were cowering under a barrage of screaming line drives. It stands to reason that you'll see a few more tough plays when you're playing behind an awful pitching staff than when Maddux, Smoltz, and Glavine are on the mound.

An example -- Ryne Sandberg versus Frank White. For all these reasons, it's very hard to look at range factors and determine how much is due to the player's ability versus external forces. One interesting example is Ryne Sandberg's 1983 season, when he led the majors with 571 assists, a fielding percentage of .986, and a range factor of 5.74. The average range factor for 2Bs was 4.53, meaning that Sandberg had about 27% more chances per game than the average 2B. From these numbers, you would be forced to conclude that Ryno had the best range of any second baseman that season. But there's a little more to the story.

Sandberg's Cubs had a predominantly right-handed pitching staff that year. Consequently, when Ryno was in the field, 45% of the batters who put the ball in play were lefties, compared to a league average of 40%. That translates into 225 more lefties than normal, and since lefties are much more likely to hit the ball to 2B than righties, he got quite a few more chances to make plays.

The Cubs staff was third from the bottom of the NL in strikeouts. As a result, Cubs pitchers put 4663 balls in play, second most in the league, and 75 more than the league average. Sandberg got his share of those extra chances.

Sandberg picked up a bunch of assists and putouts on double plays, and he led the majors in DPs by second basemen with 126. But the Cubs staff had the highest on-base percentage in the league, meaning that he had more than his share of double play opportunities.

Finally, and most importantly, the Cubs pitching staff led the majors in ground ball percentage, and that translated into more than 300 extra ground balls over the course of the season.

When you remove the effects of facing 225 more lefties, seeing 75 more balls put in play, picking up some extra PO and A on double plays, and seeing 300 more ground balls, Sandberg's numbers are not all that different from the league average. Most of his impressive range factor derived from the large number of opportunities presented to him.

Please understand that I'm not knocking Sandberg. I'm just using his 1983 season to point out that an outstanding range factor doesn't necessarily indicate outstanding range. It can also mean "decent range and good hands and an unusually large number of balls hit his way."

The season before, in 1982, Frank White had a range factor of 5.21, which was 0.51 above the league average. He was helped a little by a Royals staff that put 64 more balls in play than the average team. On the other hand, that staff included more than the normal number of lefties and flyball pitchers, and White saw 317 fewer lefty batters and 175 fewer ground balls than is normal for a 2B who played the same amount. So, even though Sandberg's 1983 range factor was much higher than White's in 1982, I would argue that White had a much better defensive season, because he produced a lot of outs despite many fewer chances to make plays.

Note: Total Baseball publishes a number called Fielding Runs that rates players based on putouts, assists, DPs and errors. Like the simple range factor, Fielding Runs doesn't adjust for left/right splits, ground-ball percentages and some of the other factors I've listed. According to Total Baseball, White cost his team 15 runs defensively that year. I couldn't disagree more. When you take into account the characteristics of his pitching staff, I figure he was the best 2B in the league. He won the Gold Glove that year, and I believe the voters got it right.

Adjusted Range Factors. To help us come up with accurate range ratings for our past season disks, I developed a new type of range factor that adjusts for these external forces. It measures playing time by counting balls put in play while each fielder was at his position. It counts only those PO and A in which a fielder's range was really being tested (such as turning a grounder into an out, or catching a line drive or fly ball), while ignoring plays that don't measure range (taking a throw on a steal play, catching a popup). It tracks balls put in play by left- and right-handed batters separately. It produces values for every defensive position and treats the three outfield positions separately. And it adjusts for the fly-ball/ground-ball ratio of the pitching staff. The result is a measure of range that eliminates most of the biases of the simpler range factors published by others.
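The adjustments described above might be sketched like this. To be clear, this is a simplified illustration, not the actual Diamond Mind formula: every field name, chance rate, and weight below is an assumption made for the example.

```python
# A simplified sketch of an opportunity-adjusted range factor, in the
# spirit of the adjustments described above. NOT the actual Diamond
# Mind method; the inputs and rates are invented for illustration.
def adjusted_range_factor(range_plays, bip_vs_lhb, bip_vs_rhb,
                          lhb_chance_rate, rhb_chance_rate, gb_factor=1.0):
    """range_plays: PO+A that genuinely tested range (grounders turned
    into outs, liners and flies caught), excluding non-range plays such
    as steal tags or force-play pivots.
    bip_vs_lhb / bip_vs_rhb: balls in play while the fielder was at his
    position, split by batter hand.
    lhb/rhb_chance_rate: assumed league rate at which each hand sends a
    ball toward this position.
    gb_factor: adjustment for the staff's ground-ball/fly-ball tilt."""
    expected_chances = (bip_vs_lhb * lhb_chance_rate +
                        bip_vs_rhb * rhb_chance_rate) * gb_factor
    return range_plays / expected_chances  # 1.00 = league average

# A hypothetical 2B behind a righty-heavy, ground-ball staff; assume
# lefty batters send ~12% of their balls in play toward 2B, righties ~8%.
rating = adjusted_range_factor(range_plays=420,
                               bip_vs_lhb=2100, bip_vs_rhb=2500,
                               lhb_chance_rate=0.12, rhb_chance_rate=0.08,
                               gb_factor=1.05)
print(round(rating, 2))  # below 1.00: fewer plays than his chances predict
```

The point of the design is that the denominator is an estimate of opportunities rather than games or innings, so a Sandberg-style flood of extra chances no longer inflates the rating.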

But I'd be remiss if I left you with the impression that I believe these adjusted range factors are the ultimate in fielding statistics. They rest on the assumption that one can make a much better estimate of the number of opportunities to make plays by taking these external forces into account. At the end of the day, it's still an estimate. And, regrettably, it's not yet possible to compute adjusted range factors for much of this century because we don't have enough play-by-play data for older seasons.

Consequently, when I'm rating fielders for Diamond Mind Baseball season disks, I use one of three methods. For modern seasons in which we have access to detailed play-by-play data that includes the location of every batted ball, I use a technique that evaluates the number and difficulty of the chances each fielder was presented with. For slightly older seasons (1980s), where we have play-by-play data without hit location data, I use our adjusted range factors. And for seasons without any play-by-play data, I look at the traditional measures of assists and putouts per game and try to make mental adjustments for playing time, strikeout rates and other factors.

So I'm happy to see that STATS is publishing the fielding statistics and range factors in their All-Time Major League Handbook, partly because I like to see more attention given to fielding, and partly because it gives me more material to work with when rating players for older seasons. But, please, do yourself a favor. Before you take their range factors as a pure measure of defensive range, stop to think about some of the things that might be distorting the numbers.

Copyright © 1998. Diamond Mind, Inc. All rights reserved.

Understanding The Impact of Coors Field

Tom Tippett discusses the impact of Coors Field 

As you know, offense goes sky high in Coors Field. We can see that in the park factors and the home/road splits for individual players. Here are some hitting stats for the NL as a whole (including Coors), Coors only, the NL without Coors, the Coors numbers prorated to 700 plate appearances, and park-adjusted norms for players who play half their games at Coors.

                 NL                 w/o       Per      Half
              Total    Coors      Coors     700PA     Coors
G              1007       72        935       162       162
AB            69049     5194      63855       638       638
H             18184     1637      16547       201       183
2B             3367      305       3062        37        34
3B              418       59        359         7         5
HR             1917      241       1676        30        23
W              6668      509       6159        62        62
K             13309      880      12429       108       116
R              9329      975       8354       120       102
AVG            .263     .315       .259      .315      .287
SLG            .408     .536       .397      .536      .464
Runs/Tm/Gm     4.63     6.77       4.47                5.62

What can we learn from this? Quite a bit, actually:

  • If Coors Field wasn't part of the 1995 NL, the league batting average would have been four points lower and the slugging average would have gone down by eleven points. If you're playing in a Diamond Mind draft league that uses NL players but doesn't include Coors, you can expect your league averages to go down by the same amount.
  • Similarly, if Coors was not included in your draft league, the league average runs per game would go down by .16 runs. Factor in the unearned runs, and you can expect to see the league ERA go down by fourteen points.
  • The fourth column is the most compelling. It says that an average hitter with the opportunity to play 162 games in Coors Field would pile up MVP-type numbers. And this is not Barry Bonds we're talking about. This is the average hitter, including pitchers, middle infielders and September callups. Everybody.
  • The fifth column helps us evaluate real-life players who played in Colorado last year. It's a 50/50 weighting of the Coors numbers in column two and the non-Coors numbers in column three, prorated to 700 plate appearances. These are the figures that the average hitter would compile in 81 games at Coors and 81 games at the other thirteen parks. In other words, any Rockies player who posted those numbers (or the equivalent for the amount of playing time they really had) is merely average. If you move them to another park, you can expect their numbers to drop quite significantly. Keep this in mind when you evaluate your draft choices next time around.
  • Several members of the 1995 Rockies posted great numbers that were largely dismissed because of the park effect. Was this fair? Looked at in this light, it's clear that Burks had a below-average year. Castilla and Galarraga were a little above average. Bichette had a very good year and deserved consideration for the MVP award. Even if he played in the average non-Coors park, he would have hit about .312 with a slugging average of .553, numbers that compare favorably with Bonds, Gant, Sanders, Sosa, and Conine. I'd make a roster spot available for him anytime.
  • If the Rockies had played in a neutral park, their pitching staff would have finished around fourth in the league in ERA and their hitters would have been around ninth. So although they ranked last in pitching and first in hitting, it was superior pitching that carried them to the playoffs.
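For the curious, the fifth column can be reproduced by prorating the Coors-only and non-Coors lines to 700 plate appearances and averaging them 50/50. In the sketch below, plate appearances are approximated as AB plus walks (ignoring HBP and sacrifices), which appears to match how the published 638-AB line was derived:

```python
# Reproduce the "Half Coors" column: prorate the Coors and non-Coors
# lines to 700 plate appearances, then average them 50/50.
# PA is approximated as AB + W; HBP and sacrifices are ignored.
def prorate(stats, target_pa=700):
    scale = target_pa / (stats["AB"] + stats["W"])
    return {k: v * scale for k, v in stats.items()}

coors     = {"AB": 5194,  "W": 509,  "H": 1637,  "HR": 241,  "R": 975}
non_coors = {"AB": 63855, "W": 6159, "H": 16547, "HR": 1676, "R": 8354}

c, n = prorate(coors), prorate(non_coors)
half = {k: round((c[k] + n[k]) / 2) for k in c}
print(half)  # AB 638, H 183, HR 23, R 102 - matching the table
```

The same 50/50 averaging, applied to the AVG and SLG lines directly, gives the .287/.464 park-adjusted norms shown above.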