This article is part of our The Z Files series.
Concocting this piece has been a lot more difficult than I thought it would be. Don't worry, it's not a farewell note. It's not another mea culpa.
Here's the deal. There is more data freely available than ever before. Much of it is considered next level, with the expectation it will unearth previously hidden truths.
There are also more individuals accessing and analyzing that data, many offering actionable advice. This includes industry brethren, as well as fantasy players who enjoy offering their opinion. I understand the desire to be first with sage analysis, as well as seeking public attention and recognition.
Tying this together is the multitude of means to spread the word. Be it social media, podcasts, or radio, if you want the public to know what you think, you can easily find a way.
The objective, of course, is to make the right moves for one's fantasy roster. Well, that and telling people how to make the right moves.
The problem is, wanting Statcast data and the like to be predictive does not make it so. The more advanced nature of the information provides the false sense it is more meaningful than intended. Narratives are forcing a square informational peg into a round analytical hole.
Tom Tango said it best a few years ago on Twitter. I'm paraphrasing, but his message was that the words, "It's a small sample" should never be followed with "but". If it's a small sample, there is no but. "It's a small sample, but…" connotes the "but" is meaningful. Why? Because we want it to be, not because it is.
One of the culprits is the misapplication of stability points. I've discussed this before, so I won't belabor the point. Stability is a misnomer, even if the loose definition of stability is accepted (the luck-to-skill ratio in a sample is 50:50). If the stability point is denoted as "X" units (plate appearances, innings, games, etc.), the luck and skill within X units are equal. It does not mean the performance within those X units will be the same for the next X units, or any set of X units. Data with shorter stability points are currently being misapplied. In the interest of full disclosure, several years ago I was leading this brigade, so I'm not blaming anyone for the misuse, unless they've ignored others pointing out the error of their ways.
Again, the driving force is the insatiable desire to manage our rosters and not sit still while practicing excruciating patience. If we don't pick up the so-called hot player, someone else will. We look for a reason to rationalize doing so, pigeonholing information perceived to be intuitively actionable but unproven to be statistically significant. Often, the "right" decision is made, leading to a positive outcome and justifying the action.
This is where my previous life in science comes into play. For years, the following has been my signature in message forums:
I'd rather be wrong for the right reason than right for the wrong reason.
The problem is, whether the process leading to picking up a player was right or wrong, the standings are agnostic. A lucky home run counts just as much as a no-doubter. A .300 average driven by a fortunate BABIP counts the same as one fueled by a high contact rate coupled with an elevated hard-hit rate.
Especially in today's instant gratification landscape, a fantasy manager cannot sit idly, waiting for water to find its level. At this point in the season, the correct process is almost always to do nothing. That's easier said than done, though, when your fantasy teams are leaking points, or when we yearn for public acclaim by getting players "right".
Some of you are probably thinking, "Yeah, but stuff like a change in velocity or spin rate is meaningful." Or maybe, "He's throwing more sliders and fewer changeups, resulting in a higher swinging strike rate."
I am too. My approach to player evaluation is, "I need something tangible to act." However, it's usually rationalization and not supported analysis. The reasons are often intuitive, but there are countless instances of false intuition. The most blatant example could be DIPS theory. When Voros McCracken first suggested pitchers have limited control over the fate of a batted ball, it blew up a long-standing assumption based on false intuition. Sure, that thinking has been refined, but the overall notion stands. A batted ball off Pedro Martinez or Pedro Astacio had nearly equal odds of being a hit.
Muddying the analytical waters, so many factors influence play that assigning cause and effect is scientifically impossible. Let's use a drop in pitcher velocity as an example. It is logical to be concerned, as lower velocity often manifests as reduced effectiveness, not to mention it could be a harbinger of injury. Chances are, a sustained velocity decrease is detrimental, so the question becomes, will it revert to normal levels? With all this newfangled information and the means to access it, we can compare a pitcher's numbers to a similar time frame from previous seasons. If velocity was low then but recovered, there is less reason to worry. However, if it wasn't low then, that doesn't always mean there's an issue now. The unique offseason followed by the truncated spring training could be a factor. We still don't know much about the 2022 baseball. Some hurlers are allegedly finding ways to enhance their grip despite MLB's efforts to curtail the practice. Humidors could be altering the feel of the ball, adding or removing moisture based on the locale and previous storage conditions. At the end of the day, we need to decide based on the available information, even if the thinking isn't always supported by science.
As mentioned, I understand the need for small sample analysis – I'm doing it on a personal level for my teams, and on a professional level on the radio, podcasts, etc. Right or wrong, it must be done, and we deal with the consequences. Hopefully, we learn from rash decisions and stay grounded even when we get lucky.
The more I think about it, the more I'm realizing this is a me thing. I'll be honest, it's the misleading presentation of analysis in a factual manner driving this missive. And really, why should you care? Maybe I should have stuck to explaining the perils of small samples while pointing out all the variables and how you can't focus on one while the others are all exerting immeasurable influence.
I thought about jettisoning this file into the ether, but selfishly it's been therapeutic, so I'm going to post and send it to my editor, wishing him luck in trying to come up with a photo for the tease. I'd also like to thank you for your indulgence. Trust me, I'll get back to the usual fodder next time.