The debate over gaming’s six-to-ten rating problem rages on, and the current spokesperson for the prosecution seems to be Metacritic co-founder Marc Doyle. Having recently revealed that Metacritic has de-listed games review sites in the past for ‘corrupt practices’, he has also taken time to suggest that, actually, the games review scale is being undermined by the fact that not enough games journalists are “review[ing] all the sh*t”.
Clarifying his viewpoint over at GamePro, Doyle claims that reviewers have no clear idea of what separates a game that scores one out of ten from a game that scores two out of ten. This contrasts starkly with the precision with which reviewers can place a game as an ‘eight’ or a ‘nine’ (or even as an ‘eighty-nine’ versus a ‘ninety’). The solution, in his eyes? Reviewers should play more bad games, so that they fully understand the subtleties of lower-end game reviewing.
Personally, his words leave me somewhere between ambivalence and apathy. On the one hand, he’s probably not wrong in saying that publications devote their time to reviewing at the upper end of the scale. I’m sure our staff (and readers!) who’ve freelanced and work-experienced their way into major sites and mags are more than familiar with being handed writing tasks that are barely a step sideways from tea-room duty: bad games are begrudgingly reviewed. Even here at The Reticule (where we hope we’re offering a varied look at the gaming landscape), we naturally gravitate towards big-name releases and those most likely to give us an enjoyable experience. Or at least, we hope for a game that will give us something interesting to talk about. (Hey, if you’re already working full-time, making room in your week for a miserable ten-hour ordeal isn’t something you do lightly. If only all bad games were obligingly small.)
But criticising reviewers’ tendency to focus on good games nevertheless seems an odd bone to pick. I’m not crazy about the ten-point system, but surely the ‘problem’ would be better solved by critics further differentiating the games that are worth playing? As it stands, the ten-point scale has room for five or six degrees of ‘not worth your time’. It’s my opinion (and also the opinion reflected by our own review system) that you only really need two: bad and exceptionally bad. Fleshing out those five or six degrees wouldn’t just be a pointless task; it’s also something that other mediums don’t bother with (and the rhetoric behind rating reform is supposedly in aid of matching the coherence of reviews in those other mediums). Only a review aggregation site would really desire that kind of detail. Thoughts in the comments thread, folks.