I've been reading reviews and news articles from Pitchfork Media (there's a link under "Good Reading") for a while now. I read it mostly for news, trivia and the occasional review. Pitchfork has a very particular genre focus, mostly dealing with releases in the realm of 'alternative'/'indie'/'alt.', which also includes releases from hip-hop and electronica artists. And there's the occasional pop record (for example, they reviewed Kylie Minogue's 2002 release Fever). Nonetheless, there is a definite bent towards the 'other' side of today's popular music.
The reason I've put it into the "Good Reading" link section of the blog is not because I necessarily value the content, but because the content is genuinely interesting. For instance, the fact that they would review Minogue's 2002 album, but none of her subsequent releases, is interesting. Today there is a feature on Lily Allen's naive-pop/ska-pop/yob-pop album Alright, Still next to an article on the new solo album by Rob Crow, frontman of the indie-rock/emo/pseudo-experimental band Pinback. I can't deal with those sub-genre labels very well; suffice it to say the two are (at least stylistically) from very different places. So this is all very interesting.
Since I started my occasional visits to the site, I've had a look at the record reviews from the last month, not so much to keep up to date as to check how many reviews were published. Consistently, over the past year, more than 100 new reviews have been published on Pitchfork Media every month. That's over 1200 new reviews in a year, which implies 1200 new albums every year. Of course, let's give some room for compilation albums and call it 1000 new albums in a year. Hell, let's call it 500... That's 500 new albums in a niche corner of popular music every year. That's more than one a day. What about the jazz, classical, and 'world' (I hate that term) releases?
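Just to spell out the arithmetic (a trivial sketch in Python; the 100-a-month figure is my own eyeballed count, not an official number):

```python
# Back-of-envelope: Pitchfork's review volume as an album count.
reviews_per_month = 100            # my rough monthly count, not official
per_year = reviews_per_month * 12
print(per_year)                    # 1200 reviews (and albums) a year

conservative = 500                 # after generously discounting compilations
print(conservative / 365)          # ~1.37 -- still more than one album a day
```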
So, there's a lot of new music being released. A lot. What of it? Pitchfork attaches a score out of 10 to every review it publishes. In January 2007 there were 110 new reviews, of which I count 8 compilations and one retrospective. Of the remaining 101 releases, only 3 received a rating below 5. What does five mean? I'd imagine that 5 reflects something like "It's okay; I don't know; if you like this sort of thing..." or, better still: average. So only 3 releases in the past month were below average?! Well, according to the critics at Pitchfork, the answer is yes. At random, I chose a month from last year: October. Surprisingly, there were also 110 reviews that month, but this time five releases scored below average. May 2005: 100 reviews, 10 below average (ooo... a bad month). September 2004: 105 reviews, 5 below average.
Ok, ok... so there's a trend. Not only is there a lot of music being released, but the vast majority is meeting with favourable (above-average) reviews. Now, I'm not a mathematician, but my little bit of education in social psychology, research statistics, general statistics, and natural science, combined with common sense, is sounding an alarm. The bell curve (correctly called a "normal distribution") describes how values from many naturally occurring processes spread around their average. It's called the natural/normal curve because, naturally (i.e. generally in the natural world), the graph would look like this, all things being equal:
If review scores were normally distributed around the midpoint of 5, about 50% of them would fall below 5; at Pitchfork, below-average scores make up less than 10% of the reviews. I'm not going to do the full math on this, because I can't be bothered, but it is clear from the few sampled months that a curve generated from the Pitchfork reviews would be shifted heavily towards the good end of the scale.
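Well, the lazy version of the math is easy enough. Pooling the four sampled months and treating each review as a coin flip, "below 5" versus "5 or above" (a crude sketch; pooling across years and assuming scores centred on 5 glosses over plenty):

```python
from math import comb

# The four sampled months: (rated releases, scores below 5).
months = {
    "Jan 2007": (101, 3),   # 110 reviews minus compilations/retrospective
    "Oct 2006": (110, 5),
    "May 2005": (100, 10),
    "Sep 2004": (105, 5),
}

n = sum(total for total, _ in months.values())         # 416 reviews
below = sum(bad for _, bad in months.values())         # 23 below average
print(f"{below}/{n} = {below / n:.1%} below average")  # ~5.5%, nowhere near 50%

# If scores really centred on 5, each review would be a fair coin flip.
# Probability of 23 or fewer 'below-average' results in 416 flips:
p = sum(comb(n, k) for k in range(below + 1)) / 2**n
print(f"chance under a fair coin: {p:.0e}")            # effectively zero
```

However generous you are about what counts as "average", the odds of this pattern coming from scores centred on the middle of the scale are effectively nil.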
I found another website called Metacritic, which compiles weighted averages (i.e. some reviewers have more effect on the final score than others, though Metacritic.com won't reveal the weightings, which makes sense) from reviews across the internet and print publications. A quick glance at the history of reviews on that site reveals a similar situation, where most releases receive above-average ratings.
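In case "weighted average" sounds mysterious: it just means each score is multiplied by how much that reviewer counts before averaging. A toy sketch with invented sources and weights (the real weightings, as noted, aren't public):

```python
# Metacritic-style weighted average -- hypothetical sources and weights.
reviews = [
    ("Big Magazine", 8.0, 1.5),   # (source, score out of 10, made-up weight)
    ("Webzine",      7.0, 1.0),
    ("Local Paper",  5.0, 0.5),
]

weighted_sum = sum(score * w for _, score, w in reviews)
total_weight = sum(w for _, _, w in reviews)
print(f"weighted average: {weighted_sum / total_weight:.1f}")  # 7.2
```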
Ultimately, it seems that things are not equal. I can muster several possible reasons for this:
Firstly, things are not equal because reviewers and media publishers are already receiving filtered material to review; the bad stuff isn't even reaching them. If that's the case, then I have a problem with the fact that reviewers don't have a point of reference for comparison, and neither do their readers.
Secondly, things are not equal because there are other motivations behind the good reviews (conspiracy theory!).
Thirdly, reviewers are so overwhelmed by the volume that it becomes genuinely difficult to tell good from bad. Or, worse, they're not prepared to give something a bad review because... I don't know; no one wants to be the bad guy?
Fourthly, the music being released is actually all above average and worthy of critical praise. That would be great, but I hardly believe it; and if "average" means the average of these very releases, it is impossible by definition.
Fifthly, music is a social rather than a natural phenomenon, and so can't be expected to follow a normal distribution at all.
What seems most likely to me is a version of the third explanation: that music has become so "samey" that it is genuinely difficult to tell one record apart from another. A student of mine made a mix disc of songs she liked, and I honestly had a very difficult time distinguishing one group from the next.
My friend Robyn told me about a directing class she was in, where performances (weak performances) given by fellow students were praised by the lecturer for their 'subtlety'. In response, Robyn pointed out (to put it euphemistically) that there is in fact a great difference between 'subtle' and 'arbitrary'/'meaningless'/'boring'/'random'. If there were (statistical) normalcy in the class, we'd expect some students to be adept while others were weak. Yet when weak is called 'subtle' it suddenly becomes adept, because subtlety is very, very difficult to get right.
Perhaps the same situation exists in the reviewing of popular music... the ability (and will) to distinguish strong from weak depends on acknowledging that the differences are so slight that it becomes very difficult to tell subtle strengths from arbitrary weaknesses.