Yesterday I examined the correlation between Spring Training and regular season batting averages. Today: slugging percentage (both raw and isolated).
Please note that I made a mistake in the batting average spreadsheet, and that the R value is actually 0.24 instead of 0.21.
Anyway:
Now let's go to a little table comparing these R values with those recovered from year-to-year (rather than ST-to-year) correlation studies. Thanks to James Click for publishing these two years ago. R(1) refers to the ST/season values, while R(2) refers to the season/season values.
The academic statistician in me says "all those Spring Training correlation values are significant at the 1% level." The more reasonable part of my brain points out that, yeah, that much is true, but they're also all really low. The highest R(1) value (ISO) is lower than the lowest R(2) value (BA), and anyone who's read at least one Baseball Prospectus article in his/her life could tell you that batting average is considered too volatile a statistic to have much predictive value on a year-to-year basis.
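If you want to poke at that kind of claim yourself, here's a rough sketch of the test I'm describing. The scipy call is standard, but the paired SLG values below are placeholders, not my actual spreadsheet data:

```python
# Minimal sketch of an ST-to-season correlation check.
# The paired values are made up for illustration, not real player data.
from scipy.stats import pearsonr

# Hypothetical pairs: (Spring Training SLG, regular season SLG)
spring = [0.512, 0.430, 0.610, 0.388, 0.475, 0.550, 0.402, 0.495]
season = [0.478, 0.415, 0.520, 0.401, 0.433, 0.505, 0.390, 0.460]

r, p_value = pearsonr(spring, season)
print(f"R = {r:.2f}, p = {p_value:.4f}")

# "Significant at the 1% level" just means p < 0.01 -- it says nothing
# about whether R is big enough to be useful for prediction.
```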
Statistically speaking, there is a significant relationship between Spring Training and regular season performance, in that the numerical distribution isn't totally random. The evidence shows that good ST numbers are more likely to be put up by good players than bad players, and vice versa. The correlation, though, is very weak (an R of 0.24, for example, means Spring Training batting average explains less than 6% of the variance in regular season batting average), and when you run into something like that, it's usually best to ignore the numbers as much as possible, because they aren't worth very much.
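To put a number on "very weak," here's a quick back-of-the-envelope sketch. The league mean and standard deviation are made up for illustration, but the arithmetic is the standard regression-to-the-mean calculation:

```python
# Rough illustration (my own sketch) of why an R around 0.24 carries so
# little predictive weight: variance explained is R squared, and a simple
# regression prediction pulls a hot spring most of the way back to the mean.
r = 0.24  # the corrected ST-to-season batting average correlation from above

explained_variance = r ** 2
print(f"Variance explained: {explained_variance:.1%}")  # ~5.8%

# Hypothetical league numbers, purely for illustration
league_mean_ba, sd_ba = 0.270, 0.030
spring_ba = 0.330  # a player hitting .330 in Spring Training (2 SD above average)

z_spring = (spring_ba - league_mean_ba) / sd_ba
predicted_ba = league_mean_ba + r * z_spring * sd_ba
print(f"Best-guess regular season BA: {predicted_ba:.3f}")  # about .284
```

In other words, even a monster spring only nudges your best guess a few points above league average.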
What it comes down to is that, given a decently trained eye, visual observation will tell you more about a player's development over the course of the winter than his springtime statline. Which is probably something you already assumed, but it's always nice to have closure.