University metrics – producing plywood?

A few days ago I attended a training course on journal metrics – although the course itself carried the more personal title of “Evaluate your publications” (emphasis added). I signed up hoping it would shed some light on all these indicators that we hear about every day but that I, at least, don’t fully understand.

Whilst the course itself wasn’t that useful, it made me think a great deal, though not in the way the course organiser intended. The performance indicators being used to evaluate research are deeply meaningful, and we need to start understanding them. It is both interesting and sad that, whilst many of us have spent a long time thinking about and researching the meaning and use of performance indicators in the neoliberalisation of public services, we have turned a blind eye to the same processes being applied to our very own academic work.

I think there are two reasons for this. On the one hand, although we are able to see the danger when it affects others, we have partly bought into the ideological construction developed by an increasingly neoliberal academia. On the other hand, we do see the point in being cited: we actually like to think that someone out there is interested in our work and that our work is somewhat useful. This raises the following conundrum: is the current obsession with citations and impact the correct way to measure this, or is it rather an ideological construction designed to ensure that, as academics, we only develop certain types of research and behaviours?

There are different indicators designed to measure our research, or rather what is increasingly being called scientific production. These indicators primarily measure something they call “impact”. In rather simplistic fashion, they gauge impact by counting how many times an article is cited in other articles that are published in equally citable journals. The system, devised by the librarian Eugene Garfield (yes, like the cat, but not as innocent), counts only the citations an article receives in the two years immediately after its publication. After that, who cares?
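For the record, the standard two-year journal impact factor works roughly like this (a simplified sketch; the year 2017 is just an illustrative placeholder):

IF(2017) = (citations received in 2017 by items the journal published in 2015 and 2016) ÷ (number of citable items the journal published in 2015 and 2016)

So a citation that arrives three or more years after publication simply never enters the calculation.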

It is an interesting measure. In my discipline – and I’m sure you can all find examples in yours – the big names, such as Max Weber, Antonio Gramsci, or even more contemporary ones such as Theda Skocpol or Charles Tilly, may have rarely been cited in the two years immediately after their work was published. So what can we do?

In my training session the advice was clear: check which journals in your area of research have the highest impact factors, then target your publications at them. Apparently, carrying out high quality research and then looking for a journal to publish it in is no longer in vogue.

The ideological discourse driving the idea that universities have too much ‘dead wood’ is bringing us to a future (and possibly a present) where universities end up with too much plywood (to continue the metaphor). Wouldn’t it be better to create the conditions necessary for different types of wood to develop? Most of us may just become pine, but by allowing good quality research – whether or not it has a two-year impact – some oak or mahogany may appear. Surely creating the conditions where quality oak can shine, rather than mediocre plywood, is a much better scenario, don’t you think?
