If you read yesterday’s post and its comment, you will see that the Wine Spectator is becoming a little touchy about its scoring system. You will also note two things.
Although I did not give the score for a “classic” as awarded by the Spectator – 95, Mr. Matthews tells us – I apparently implied St. Hallet had one. As far as I know, they don’t, but I don’t worry about the scores of classics.
Mr. Matthews does not address the notion that sweet “dry table wines” score higher in their system. This, of course, is what the last paragraphs of yesterday’s post say clearly, and what the headline promises the article says. But possibly Mr. Matthews was so incensed by my spelling error he merely forgot to clarify the magazine’s position on this issue.
So to give Mr. Matthews and his magazine something to address:
– I think the Wine Spectator taste tests are about as scientific as the Pepsi Challenge. (If you wish to dispute this claim, I suggest you read the articles posted in the AAWE Journal of Wine Economics.)
– I think that wines of high viscosity (i.e. “thick mouth-feel”) and high sugar content are prone to higher scores.
If the Wine Spectator wishes to claim their scores are more valid than the Pepsi Challenge they need to implement a tasting regimen similar to that employed by the IVDP.
Finally, I apologize to all readers for my erratic spelling, particularly with French. Alas, I am nowhere near bilingual, and it shows just about every time I use French terms and names in my column. I also avoid diacritical marks (e.g. e-acute) because they do not carry well across international operating systems. I avoid apostrophes for the same reason, though I find that slightly annoying as a writer.
So to properly answer Mr. Matthews’s sneer, I suggest the Wine Spectator research testing systems that depend on rigour beyond brown paper bags.
Dear Dr. Booze,
Your argument remains unsupported by your evidence: you do not show – even for the tiny data set of these four wines – that higher residual sugar invariably leads to higher Wine Spectator scores. This is not a “sneer” – it’s a simple observation.
Wine Spectator tastes blind to eliminate bias caused by label or price. Our critics apply standards of quality developed over years of tasting thousands of wines and extensive field research. You may not agree with our judgments, but you have not yet presented compelling evidence that they are slanted in some simplistic way.
With all due respect,
Dear Mr. Matthews,
Fifteen years ago I didn’t find it necessary to buy a set of Acuvin strips to make sure I wasn’t going crazy when drinking Californian Pinot Grigio. My sample set goes back a couple of decades but is, of course, highly personalized and therefore prone to bias.
Your tasting panel eliminates one large source of error but doesn’t correct for a couple more (taster accuracy, tasting design handicaps). It is possible that, as you maintain, your tasters are so solid this is redundant. This can be easily ascertained.
How about we get a couple of academics to run some stats? You provide a data set including manufacturer, label, vintage, wine type, and score. I’ll get the vintners to provide the data sheets from when they bottled the wine. (Or, if you prefer to keep the data controlled yourself, you can get the vintners to provide the data.) The academics run the correlations. If your tasting panel is accurate, there will be many correlations to chemical data points. This would prove your panel is reliable. (And possibly improve the WS regimen and reputation.) At that point people may argue over what is “great” wine, but there can be no discussion of tasting accuracy.
I’ll publish the results regardless and indeed yell them from the rooftop, especially if your panel comes up roses.