I feel compelled to begin this article with a few disclaimers.
First: This article is three thousand words long and has very little to do with the Seattle Mariners. If you aren't even thinking about taking SABR101x, save yourself fifteen minutes and scroll down to the section that says "Conclusion".
Second: I paid $25 to take SABR101x. Partly it was because I wanted to be able to hang a fancy certificate on my wall announcing that I am a baseball-loving chump, but mostly it was because I think that a massive online sabermetrics course is an idea worth funding. Either way, my monetary contribution may have biased my expectations with respect to the course's content.
Third: Due in part to external time constraints set by other projects and in part to my own lack of initiative, I didn't begin work on any course materials until late June. At this point, the rest of the class was four weeks ahead of me. Furthermore, I worked mostly between the hours of 8:00 PM and 1:00 AM Hawaii time, which meant that the instructors were very rarely online at the same time that I was. As a result of these timing issues, I did not participate heavily in the SABR101x discussion forums. For all I know, the discussion sections and the help given by TAs and professors therein were wonderful additions to the class; for all I know, they were awful messes. The fact that I got nothing from them was my own fault, not a reflection on their quality. I will thus offer no further comment on them.
Finally: I am not an ideal candidate to take a 101 course on baseball science. I already knew almost everything covered in SABR101x's Sabermetrics and Statistics tracks. As such, I may be the wrong man to judge this material on its difficulty.
Now that that's out of the way...
SABR101x Course Eval
Background
SABR101x was split into four tracks: Sabermetrics, Statistics, Tech, and History. Each week, edX released a new "module" featuring five to eight subsections (one or two for each track, plus interviews with noted baseball analysts). The Sabermetrics track began with the origin of the word "sabermetrics" and worked up through various run estimators, concluding with an introduction to Replacement Level and WAR. The Statistics track began with some data science concepts, covered bivariate analysis and various related calculable coefficients, and ended with discussions of linear regression and sample size. The Tech track was split into two parts. The first introduced the database query language SQL, walking students through basic SQL terms like SELECT, WHERE, and JOIN. The second introduced R, a language designed for analyzing and graphically representing data, along with RStudio, a MATLAB-like environment for working in R. Finally, the History track covered one or two notable figures in sabermetrics history per week, starting with Henry Chadwick and ending with George Lindsey.
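For the uninitiated, here's the flavor of a query that uses all three of those terms. This is my own illustrative sketch written against Lahman-style tables (the kind of preloaded data the Sandbox offered), not actual course material:

```sql
-- Season-by-season home run lines for every player born in Washington state.
-- SELECT picks the columns, JOIN links two tables on a shared key,
-- and WHERE filters the rows.
SELECT m.nameFirst, m.nameLast, b.yearID, b.HR
FROM Batting b
JOIN Master m ON b.playerID = m.playerID
WHERE m.birthState = 'WA';
```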
Each subsection was presented as a series of YouTube videos, with problem sets and questions between videos. Some videos had associated reading materials linked below them. With the exception of interviews, the videos were screen captures of PowerPoint presentations with notes written on them in real time and accompanying narration by instructor Andy Andres. For the SQL Tech sections, problem sets involved writing and running SQL code using BU's SQL Sandbox, a tool for online perusal of various preloaded database files. For all other sections, problem sets required students to answer multiple choice questions, check boxes representing correct statements, or type in answers. Each problem set had an accompanying discussion board. The course concluded with a four-part final exam covering most of the previous material.
General Comments
I mostly enjoyed SABR101x. It was well-constructed, clearly organized, and stably hosted. Andy Andres (the principal professor) is an engaging speaker. The course covered a wide range of topics, most of them thoroughly and well, and as far as I can tell contained no misinformation.
With all of that said: SABR101x was a seriously flawed academic experience.
For one thing, it was way too easy. Perhaps this shouldn't have been surprising, but man, when they called it 101 they meant 101. The course was split into six theoretically weeklong modules, each of which took me maybe four hours to complete. Most of that time was spent watching the lecture videos at double speed. (Thank goodness for the interface's fast-forward option; Andres' lecturing was charismatic but, uh, stately.) All told, watching all of the lecture material, doing all of the problem sets, and getting an A+ in the class took me less than 24 hours of work.
I will qualify this by directing you to the fourth disclaimer above. I already knew a lot of this stuff. I also go to a very academically rigorous engineering school, so I'm used to a heavy workload. I have friends who just completed their freshman years at BU, and even the ones in the honors program universally reported being unchallenged and bored by the intro classes, so maybe this is just the pace that Boston University 101 courses move at.
But I'm not so sure. For one thing, there was an obvious difference between the difficulty level of the main course and the difficulty level of the outside readings. (One three-page excerpt on run creation from Tom Tango's blog still has my head spinning.) For another thing, a lot of the lack of difficulty stemmed from the lack of legitimate evaluation. The problem sets between videos were almost all totally trivial. Everything not done in the SQL Sandbox was a multiple choice, check-the-correct-statements, true-or-false, or fill-in-the-blank question. I understand why - it's not like I expected the teachers to grade 15,000 student essays on OPS - but man, those aren't legitimate forms of evaluation. Especially not when the student can just click back to a transcription of a lecture and find the answer immediately. Especially especially not when the student has three chances on every question.
Why did they feel the need to evaluate us between lectures anyways? No one's ever going to ask what I scored in an edX course. The only thing that might even slightly matter is whether or not I understood the material well enough to earn a passing grade, but the questions were so easy that presumably anyone paying even slight attention could've managed that. If they were trying to reinforce the learning from the lectures, they shouldn't have used question formats more suited to reinforcing guessing skills. Did they think I needed something to do between videos so I wouldn't fall asleep?
No. There wasn't any real reason for those checkbox check-ins; they were just there because some sort of standardized-test inertia has convinced professors everywhere that multiple choice questions achieve something. Expecting SABR101x to be an exception to the rule was borderline delusional of me. Still, it's a shame. I could've used the time I spent on all those stupid checkboxes for something more meaningful. Like writing a 3000-word course evaluation to put on a Mariners blog.
Oh, but the end-of-unit interviews were great.
From here on in, my more detailed feedback will be divided into four sections, one for each of the four tracks. Again, if you don't care about SABR101x, now would also be a good time to skip to the end.
Sabermetrics Track Comments
One would think that the Sabermetrics track of a course called SABR101x would be the heftiest, most challenging, and most complete part of the experience. One would, evidently, be wrong. SABR101x's title track dedicated far too much time to the theory behind historical antiquities and not nearly enough to the statistics actually used by modern analysts. If (as I believe) the average SABR101x student was more interested in understanding today's writing and doing some research of their own than in understanding why metrics like Bill James' Runs Created seemed great years ago, then some pretty significant balls were dropped here.
I understand the course creators' compulsion to start from the very beginning and work from there. (When I say the very beginning, I mean it. We're talking dictionary definitions of "sabermetrics".) But there really just wasn't enough material presented. Where was wOBA? Baserunning scores? Any pitching or fielding metrics? Seriously: this course spent hours on the runs-to-wins conversion, the differences between various run estimators, and the merits of Base Runs, but did not even mention fielding-independent pitching evaluation. What the hell?
I admit freely that I learned some interesting things in the Sabermetrics track. It turns out the reason OPS works despite OBP and SLG not having the same denominator is that breaking down the two expressions and simplifying everything yields a formula not unlike those used in Linear Weights metrics. Cool! Totally irrelevant to today's sabermetric writing.
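For the curious, here's my back-of-the-envelope version of that algebra - a sketch of the gist, not the course's exact derivation - ignoring HBP and sacrifice flies and crudely pretending both denominators are just AB:

```latex
% OPS over a (crudely) common denominator, ignoring HBP and sacrifice flies:
\begin{aligned}
\mathrm{OPS} &= \mathrm{OBP} + \mathrm{SLG}
              = \frac{H + BB}{AB + BB} + \frac{TB}{AB}
              \approx \frac{H + TB + BB}{AB} \\
             &= \frac{2\,\text{1B} + 3\,\text{2B} + 4\,\text{3B} + 5\,\text{HR} + BB}{AB}
\end{aligned}
% using H = 1B + 2B + 3B + HR and TB = 1B + 2(2B) + 3(3B) + 4(HR)
```

A walk, single, double, triple, and home run end up weighted 1:2:3:4:5, which is at least in the neighborhood of the relative values Linear Weights assigns.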
Like that OPS factoid, the Sabermetrics track was an interesting bit of history. Also like that OPS factoid, the Sabermetrics track lacked sufficient relevance to modern baseball analysis.
Sabermetrics Track Grade: C
Statistics Track Comments
I don't really remember very much about the Statistics track, which is probably telling. What little I do recall of the work that I did was pretty basic. I don't think any concepts were introduced that I haven't already used in articles on Lookout Landing, and those LL readers with actual degrees can tell you how little I know about real statistics. That I didn't learn a single new thing was more than a little disappointing.
On the other hand, maybe the reason the Statistics track was the least significant of the four is that even the most advanced sabermetric work being done today requires very little knowledge of statistics as a field. Think back to recent "big news" pieces of sabermetric writing: Mike Fast's pitch framing study, Lewie Pollis' $/WAR analysis, etc. None of them relied very heavily on statistical concepts more advanced than the coefficient of determination. The only really significant recent piece of work I can think of with a serious statistical side is Russell Carleton's series on sample size and stabilization rates - which SABR101x covered, and quite well. If the goal of the track was to start from the beginning and build up to the most intellectually challenging and significant work being done... mission accomplished?
Maybe the fact that SABR101x's Statistics track wasn't particularly substantial or challenging has less to do with the quality of the course itself and more to do with the current state of affairs in the online sabermetric community. Maybe there aren't many authors out there publishing freely available work that requires a real background in statistics. Maybe that's because teams keep hiring all of the people who are actually good at this stuff.
Hey, wait a minute--
Statistics Track Grade: B
Tech Track Comments
For me, the Tech track was a bipolar experience. The first half of it - the SQL half - was the undisputed highlight of the course. The second half - the R half - was the undisputed lowlight. Which is funny, because really there were only two differences between the halves.
The first difference was in scaffolding. Scaffolding is the art of arranging concepts in a sensible order and building towards unified understanding. A well-scaffolded course moves at a good pace but lets students integrate new knowledge in the context of the things they've already learned. A poorly scaffolded course might be too repetitive, require significant and unannounced outside research, or introduce unconnected concepts in a seemingly random fashion. The SQL sections of SABR101x were well scaffolded. The R sections were, uh, not.
Professor Andres' SQL lectures started from the very basics of SQL coding and database management theory, then built logically. First, we printed a whole database. Next, we printed only data points matching criteria we had set. Then we printed data points in a specific order. Then we printed only data points matching criteria we had set, but this time in a specific order. Everything that was taught fit together. By the end of the three-week SQL sessions, it was feasible for one problem set question to ask us to employ literally everything we had learned. Better yet, the question was usually one that a real sabermetrician might actually want the answer to. It was a terrific way to be introduced to SQL.
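Reconstructed from memory (with Lahman-style table and column names, not the course's actual problem sets), the progression looked something like this:

```sql
-- 1. Print a whole table.
SELECT * FROM Batting;

-- 2. Print only the rows matching a criterion we set.
SELECT * FROM Batting WHERE yearID = 2013;

-- 3. Print the rows in a specific order.
SELECT * FROM Batting ORDER BY HR DESC;

-- 4. Filter and order at once: a 2013 home run leaderboard.
SELECT playerID, HR
FROM Batting
WHERE yearID = 2013
ORDER BY HR DESC;
```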
In contrast, the course's R lessons were scattershot and disorganized. The things we were learning felt unrelated, and we didn't learn enough to be able to do anything of real use without copy/pasting in R code from the course website. Copy/pasting is a miserable way to learn; manually retyping is only slightly better. Ideas and functions were touched on once, never to show up again (except on the final, natch). The most egregious incident was a problem whose solution more or less required the use of the R operator %in%, which was not mentioned in any of the instructional material. Generally, it was an unpleasant experience. I don't think I learned very much R at all; certainly I did not learn to do more than one or two useful things that I can't do in Excel.
The other big difference-maker was the BUx SQL Sandbox. When the problem sets asked SQL questions, they were asking me to write my own code (often from scratch). The code's output was then analyzed for evaluation and feedback purposes, which meant that the interface could tell me what I was doing wrong. Actually writing code was an awesome break from the mindless, unfailingly easy multiple choice questions presented in the rest of the course. The R sections of the class tried to recreate the effect of the SQL Sandbox by asking me to perform operations in R and then report back some resulting coefficient. It didn't work nearly as well, because there was no easy way to get feedback on incorrect answers, and because asking non-entry level questions generally required either copying from edX or doing a lot of outside research.
I came into the Tech track with roughly equal ability in SQL and R, i.e. very little. I'd dabbled before in SQL, mostly by way of Django, the Python web framework that sits on top of a SQL database, so I knew what a SELECT command was and more or less how a database file is structured. I didn't know any R syntax, but I had worked a bit in the roughly analogous MATLAB, so I had an idea of what the program I was using could do. By the end of the course, I felt that I had gotten vastly better at SQL and more or less zero better at R. If you want to learn SQL, go audit SABR101x and play around in the Sandbox. If you want to learn R, go find some other online tutorial.
Tech Track Grade: B-
History Track Comments
I don't have a lot to say about the History track, which is a change. It was pretty cool. I liked the sabermetrician-a-week approach, which fit in very well with the course's episodic structure. The YouTube videos on historical baseball analysts represented some of Andres' finest work as a lecturer. Besides, lecture-based history courses work a lot better than lecture-based math and science courses just on general principle. Like the rest of the course, the History track was full of interesting factoids. (Did you know that the concept of linear weights for plate appearance outcomes has existed for almost a hundred years?) The sections were kept short enough that I never overdosed on history.
The only parts of the History track I didn't like were the stupid, pointless, unnecessary questions between videos. I did my whole anti-multiple-choice bit 1500 words ago, but these questions were even dumber than the rest. Every time I was asked to parrot back the number of balls in play some sabermetrician or other counted in his efforts to build a run estimator, I wanted to punch someone. If you want to quiz me on the major concepts of a lecture, fine. If you want to make me scroll back through your video, freeze-frame it, and then tell you based on Allan Roth's dates of employment exactly how many years he worked for the Dodgers, go jump in a lake. I am never going to need to know that.
History Track Grade: A-
Conclusion
In retrospect, instead of all those words, I could've just written a table of pros and cons.
Pros | Cons
--- | ---
the SQL Sandbox is a terrific learning tool and made SQL lessons a blast | the course was extremely introductory in all respects
history sections brought a new (old?) perspective to sabermetrics | pitching/fielding metrics weren't covered at all; neither was wOBA
end-of-unit interviews were generally fascinating | R and RStudio lessons were poorly scaffolded
instructional videos were thorough, stably hosted, and fast-forwardable | too many meaningless multiple choice/checkbox evaluative questions
My official judgment of SABR101x? Whether or not you should take it depends on what you're looking to get out of it. If you want to learn about sabermetricians of yore, or SQL, hop to it as soon as possible. If you'd rather learn how to understand the work done by modern analysts, I'd have to recommend you save some time and take a look at the FanGraphs library instead. If you're interested in R, or in avoiding multiple choice questions, run away as fast as you can.
I am grateful to Andy Andres and the whole SABR101x crew for all of their work, and I think they have a really neat idea on their hands. Hopefully, they'll be able to improve their execution, so that the problems I had with their class are resolved by the time SABR101x is offered again.