There was quite some excitement emanating from our journals desk last week when the latest ISI impact factors were announced. Our Journals Executive, Kim Eggleton, explains to the uninitiated what all the fuss is about and queries the future of any measurement system…
If you follow us on Twitter you’ll have seen we got quite excited last week about the latest ISI impact factors. Two of our journals are included on the prestigious ISI list (also known as Social Science Citation Index, or SSCI), and this year we’ve seen them both improve. Policy & Politics now has an impact factor of 1.302 (an increase of 72%!), and Evidence and Policy has an impact factor of 1.222. But why does this matter?
The impact factor is calculated by dividing the number of citations a journal receives in a given year (to articles published in the previous two years) by the number of citable articles it published in those two years.
As such, a journal with more citations has a higher impact factor. The higher the impact factor, the better the journal (in theory). A high impact factor suggests more researchers are reading this journal and finding its content useful in their work. And it does ring true in many cases – the journals we tend to think of as prestigious do have high impact factors.
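To make the arithmetic concrete, here is a minimal sketch of the two-year calculation described above. The journal and the numbers are entirely hypothetical, chosen only to illustrate how the ratio works:

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Citations received this year to articles from the previous two years,
    divided by the number of citable articles published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 80 citations this year to articles published in the
# previous two years, during which 61 citable articles appeared.
print(round(impact_factor(80, 61), 3))  # 1.311
```

So a journal that attracts more citations per article published ends up with a higher score.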
A high impact factor leads to an improved reputation, and many journals will see an increase in submissions, subscriptions and downloads as a result. This is why publishers, editors and authors all get very excited when it's good news.
However, many people agree that citations should not be the only indicator of content quality. Even so, the many ranking systems and lists around the world (major examples include the ABS list in the UK and the now abandoned ARC list in Australia) still follow the citation rankings avidly, and down the chain researchers are pressured to publish in these high-ranking journals.
This creates a catch-22 situation for many publishers and researchers, all unhappy with the current system but trapped in its legacy. Researchers can only publish in top ranked journals, and the number of top ranked journals is limited. What’s a researcher to do?!
There’s no one answer, and the dynamics of Open Access compound the issue further. But what is clear is that there’s more than one way to measure quality. Quality is about utility – after all, many people got into academia to make a difference. How many people are reading your work? How many of them are policy makers or practitioners? Has your work changed society in some way?
There are many alternatives coming onto the market now, ready to shake up this age-old system. Altmetric scours social media sites, newspapers, government policy documents and other sources for mentions of scholarly articles, creating metrics at article level rather than journal level (after all, a good journal can have a bad paper, and vice versa!). Kudos, a service we're proud to have just partnered with, helps authors promote their work to these important audiences. COUNTER is developing the Journal Usage Factor, designed to measure a journal's reach through downloads.
So while we're delighted with our latest results and know this is testament to some serious hard work on the part of the editorial boards (thank you!), we know there's more to it and that the impact factor only reflects utility in academic circles. That's why we go to extra lengths to get our content to a variety of audiences: we regularly make articles free, our journals have Twitter accounts, blogs and Facebook pages, and we make space in our titles for content relevant to these audiences. We do think there's more to life than the impact factor, and ultimately we want our content to make a difference in the wider world too.