Scores Still Kinda Suck – Now With More Better Science?

There’s been a good bit of discussion lately on the Global Interwebs over a recent blog post by the wine-data-focused David Morrison (to which I was alerted by intrepid friend-of-1WD Bob Henry).

In that post, Morrison puts the scores of two of Wine Spectator’s then-critics-both-named-James, James Laube and James Suckling, through the data-analysis wringer, focusing on scores they gave to wines as part of WS’s “Cabernet Challenge” of 1996.

Generally speaking, Morrison’s blog post, while enviably thorough, can justifiably be criticized as much ado about nothing, considering that no one in their right mind could draw any statistically significant conclusions from such a small data set. The summary version is that he found a high level of disagreement between the scores that the two Jameses gave to the same wines. Morrison draws some interesting suggestions out of this finding, though, primarily about the use of numbers when evaluating wine quality; to wit (emphasis is mine):

“The formal explanation for the degree of disagreement is this: the tasters are not using the same scoring scheme to make their assessments, even though they are expressing those assessments using the same scale. This is not just a minor semantic distinction, but is instead a fundamental and important property of anything expressed mathematically. As an example, it means that when two tasters produce a score of 85 it does not necessarily imply that they have a similar opinion about the wine; and if one produces 85 points and the other 90 then they do not necessarily differ in their opinion.”
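A minimal numeric sketch of that idea (the scores and critic labels here are hypothetical, not Morrison’s data): if a raw 85 is re-expressed as a z-score within each critic’s own scoring distribution, identical numbers on the shared 100-point scale can land in very different places.

```python
# Hypothetical score sets: Critic A scores conservatively around 84,
# Critic B generously around 90 -- same 100-point scale, different usage.
critic_a = [82, 83, 84, 85, 86]
critic_b = [88, 89, 90, 91, 92]

def z_of(score, scores):
    """Re-express a raw score as a z-score within one critic's distribution."""
    mean = sum(scores) / len(scores)
    sd = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5
    return (score - mean) / sd

# An 85 is above-average praise from Critic A...
print(round(z_of(85, critic_a), 2))  # 0.71
# ...but far below Critic B's norm: same number, different opinion.
print(round(z_of(85, critic_b), 2))  # -3.54
```

The same raw score sits about two-thirds of a standard deviation above one critic’s average and several standard deviations below the other’s, which is exactly why identical numbers need not mean identical opinions.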

So… where have we heard that before?

Oh, that’s right, we heard it right here on 1WD. Several times, actually…

Morrison gets to his point a different way than I did (and by that, I mean not only via data analysis, but also more eloquently and in about one-third as many words), but the point remains the same: specific numeric values are just a sucky way to talk about subjective experiences (something that the medical field has known for a long, long time), and wine criticism will always have large subjective elements baked into it.

Here’s a recap of my version of a similar conclusion (with newly-added emphasis):

“Wine ratings are most often presented via scales that imply scientific precision; however, they are measuring something for which we have no scientifically reliable calibration: how people sense (mostly) qualitative aspects of wine. Yes, there may be objective qualities about a wine that can indeed be somewhat calibrated (the presence of faults, for example) but even with these we have varying thresholds of detection between critics. That’s important because it means that the objective (i.e., measurable) quantities of those elements are not perceived the same way by two different reviewers, and so their perception of the levels of those elements cannot reliably be calibrated.

But it’s the subjective stuff that really throws the monkey wrench into the works here. How we perceive those – and measure our enjoyment of them – will likely not be fully explainable in our lifetimes by science. That is because they are what is known as qualia: like happiness, depression, pain, and pleasure, those sensations can be described but cannot effectively be measured across individuals in any meaningful way scientifically.

Yes, we can come to pretty good agreement on a wine’s color, and on the fact that it smells like, say, strawberries. After that, the qualia perception gets pretty tricky, however: my perception on how vibrantly I perceive that strawberry aroma might be quite different from yours. Once that factors into how you and I would “rate” that wine’s aroma, we start to diverge, and potentially quite dramatically at that.”

Add to this quagmire the penchant of humans to treat numeric values as fungible (see Morrison article quote above), and you have a recipe for a not-so-great consumer experience when using specific numbers to rate a wine, and then comparing those specific numbers across critics, particularly when those numbers are stripped of their original context (which is, oh, just about every time they are presented…).


Grab The Tasting Guide and start getting more out of every glass of wine today!


Copyright © 2016. Originally at Scores Still Kinda Suck – Now With More Better Science? – for personal, non-commercial use only. Cheers!

In Defense Of White Wine (Thoughts On Expert Scores And Red Wine Bias)

White wines get the review shaft

A little over a week ago, my friend Jeff Siegel published details of a study by Suneal Chaudhary, PhD, who analyzed over 64,000 wine scores, dating to the ’70s, from “major wine magazines.” The study’s aim was to ascertain if red wines routinely receive higher point score reviews than white wines (other styles were presumably ignored).

Long-time 1WD readers know that I have become a big fan of statistically meaningful data, and the data in this case (including how those data were handled) qualify on all counts: sample size, time span, and applied analysis.

It’s dangerous to draw too many conclusions, but Jeff summed up the congruence of the findings with the common sense experiences of wine geeks everywhere nicely in his original post on the subject:

“We don’t pretend that these results are conclusive, given the variables involved. Red wines may be inherently better than white wines (though that seems difficult to believe). They certainly cost more to make, and that might affect the findings. The review process itself may have influenced the study. Not every critic publishes every wine he or she reviews, and those that were published may have been more favorable to reds than whites. And, third, the scoring process, flawed as it is, may have skewed the results regardless of what the critics did.

Still, given the size of the database, and size matters here, Suneal’s math shows something is going on. And that’s not just our conclusion. I asked three wine academics to review our work, and each agreed the numbers say that what is happening is more than a coincidence. That’s the point of the chart that illustrates this post – 90 percent of the 2010 red wines that we had scores for got 90 points or more.”
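As a toy illustration of the kind of comparison involved (simulated scores, assuming a red-versus-white gap like the one the study describes; none of these numbers come from the actual 64,000-score data set):

```python
import random

random.seed(1)  # make the simulation repeatable

# Simulated review scores: reds drawn from a slightly higher-scoring
# distribution than whites, mimicking the bias the study reports.
reds = [random.gauss(90.5, 2.0) for _ in range(1000)]
whites = [random.gauss(89.0, 2.0) for _ in range(1000)]

def share_90_plus(scores):
    """Fraction of scores at 90 points or above."""
    return sum(s >= 90 for s in scores) / len(scores)

print(f"reds   scoring 90+: {share_90_plus(reds):.0%}")
print(f"whites scoring 90+: {share_90_plus(whites):.0%}")
```

Even a modest shift in the underlying distributions produces a noticeably larger share of 90-plus reds, which is why a large sample can make a small bias unmistakable.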

What to make of all of this?

Personally, I think that we wine geeks ought to be a bit more flabbergasted at the discrepancy, considering that, in general, white wines are superior to reds aromatically…

Yeah, I did just write that.

And I meant it, too. More on that in a couple of minutes.

Jeff has since published a thoughtful followup post detailing some of the reactions to the study (I should mention here that both posts, and the study details, are well worth a read, and I don’t mean the I’ve-got-fifteen-minutes-after-lunch-at-the-office kind of read, I mean the kind of read where you have few distractions, an hour or so of free time, and an open mind… you know, the same way that you read every post here on 1WD, right???). What he found was not all sunshine and unicorn-farted-rainbows, either:

“…[there] was a common theme among the comments, emails, and discussions Suneal and I found – that only wines made with serious grapes deserve the best scores, and the only serious white grape is chardonnay (and don’t even think about mentioning rose). So, according to this argument, why should anyone be surprised by any kind of bias? It’s only natural and right… Which, of course, made me very sad – the some animals are more equal than other animals theory.”

If that doesn’t also make you, as a wine lover, sad, then I’d posit that you need to drink a little bit less, sober up, and read it again. Because the thinking that the only serious fine white wine grape is Chardonnay is patently ridiculous. Like, get-the-funny-shoes-and-red-horn-nose-you-clown ridiculous.

It is certainly true that not all fine wine grapes are created equal; there are simply so many grape varieties made into fine wine, and statistically not all of them can produce wines that contain enough aromatic, textural, and flavor complexity – let alone potential age-worthiness and harmony – to be considered among the best wines in the world.

Here’s the rub, though – there is no general Aristotelian avatar of perfection when it comes to wine. You can get close to an idea of perfection for wine made from iconic grapes and regions, but when comparing wines beyond that generality, that perfect Aristotelian avatar image starts to get very fuzzy, very quickly.

I’ve tasted (ok, and drank) a few lifetimes worth of wines at this point. As one might expect, I’ve yet to find a white wine whose textural complexity matches those of the best reds in the world; you’d expect that because the process of creating a red wine’s texture is, itself, usually more complex than it is for white wines. And I’ve yet to encounter a red wine whose aromatic complexity consistently bests the world’s best whites. Which is why I try to explain so often to hapless, glazed-over-eyed conversation victims (er, people) that Riesling gives you more bang for your buck than, say, Cabernet Sauvignon.

And, ok, the cynic in me totally wants to shout “those reds only got higher scores because they have more alcohol in them, and all of those wine magazine critics like boozy shizz!” But that would be kind of rude, probably…


If the study results that Jeff so eloquently and effectively wrote about have anything to teach us wine geeks, it’s that we shouldn’t be afraid to consider the best white wines in our collection as on equal – and in many cases, superior – footing to whatever reds other people are bringing to the tasting party. White wines are not also-rans, no matter what the scores reflect; like women vs. men, we are talking about a yin and yang of sometimes opposite but often complementary and equally powerful and equally, well, perfect in their own imperfect ways.




Copyright © 2016. Originally at In Defense Of White Wine (Thoughts On Expert Scores And Red Wine Bias) – for personal, non-commercial use only. Cheers!