Archive for May, 2015

Academia and citations


A colleague of mine on Facebook just shared this interesting link about citation practices in academia. The basic gist of it is that although a ton of articles are published annually, very few of them are cited. And even if an article is cited, it doesn’t mean that it’s actually been read. This means that the vast majority of articles go unheeded (and consequently uncited). Taken together, the article then argues that:

[The impacts] of most peer-reviewed publications even within the scientific community are miniscule.

In defense of these claims, the article links to a paper by Larivière and Gingras which looks at the proportion of papers cited once between 1900-2007, as well as a few other measures of citation concentration. With specific reference to Humanities, the authors make the following point:

The very low percentage of articles cited at least once may be a reflection of the tendency of humanities researchers to cite books instead of articles. All in all, these data strongly show that, in all fields except HUM, fewer and fewer of the published papers go unnoticed and uncited and, consequently, science is increasingly drawing on the stock of published papers.

So… in all other fields the authors look at (Natural Sciences and Engineering, Medicine, and Social Sciences), an increasing proportion of publications are being cited at least once, and citations are expanding beyond a narrow group of papers. This will probably only get better, since the landscape of dissemination has changed quite considerably over the timeframe the authors look at (1900-2007). Even in the past eight years, a whole bunch of resources have emerged which make it even easier to share one’s work. For example, the cut-off of 2007 is a few years before social media sites like academia.edu (2008) and orcid.org (2012) were launched, before social bibliographic programmes like Mendeley (2008) came along, and just on the cusp of resources like Facebook (2004), Twitter (2006) and Zotero (2006).

To get back to the results, the fact that the LSE blog has taken only one wee bit of the study and ignored the more general picture of increasing citation counts in other fields is a bit odd, especially when the authors of the paper come to the conclusion that:

Though many factors certainly contribute to the observed trends, two things are clear: researchers are not increasingly relying on recent science, nor are citations limited to fewer papers or journals (my emphasis).

I always tell my students to make sure that their evidence points in the same direction as the claims they’re making. Just sayin’. Although perhaps to be fair, the blog does say “82 percent of articles published in humanities are not even cited once.” That said, this point kind of overlooks the fact that citation practices are a bit different in this field compared with, say, the sciences. In the humanities (I’m reliably informed by literature colleagues) the book is king; journal articles are seen as a bit of a sideshow really. This gets a bit frustrating given that within sociolinguistics, articles are pretty important. But anyway, a more relevant point here is that even if journal articles in the humanities aren’t cited, that doesn’t mean that the research in them isn’t covered elsewhere (in a book, for example). The research gets out there, just in a different format.

Anyway, moving on. The LSE blog then makes the (startlingly bold) claim that:

Many scholars aspire to contribute to their discipline’s knowledge and to influence practitioner’s decision-making. However, it is widely acknowledged practitioners rarely read articles published in peer-reviewed journals. We know of no senior policy-maker, or senior business leader who ever reads any peer-reviewed papers, even in recognized journals like Nature, Science or The Lancet.

I’m sure that many academics do actually want to influence decision-making. But I’m perhaps a bit more skeptical of the last claim that research articles definitely aren’t read by policy makers and the like (again, I tell my students to avoid presenting subjective reporting as objective fact). Of course, it’s an easy argument to make that putting work behind paywalls limits access (as the LSE blog does and as I do in my recent chapter on some of these issues), but with the Open Access movement, pre-print publication and so on, the barriers to accessing research outputs are slowly being lowered. Add to that the increasing social media presence many academics are taking on (see my upcoming chapter for details), and there are certainly many more avenues for potential impact.

To address the issue of ‘low dissemination count’, the blog calls for ‘brevity’ in research outreach, either in summary papers or press releases through the media. In more detail:

No decision-maker would ever ask for summaries regarding publications and discussions in academic journals. If academics want to have impact on policy makers and practitioners, they must consider popular media, which has never been easy for scholars. This in spite of the fact that media firms have developed many innovative business models to help scholars reach out.

I think this is dangerous for a couple of reasons. ‘Brevity’ gets rid of a lot of the detail that’s important in academic research, with its conditions and caveats. What you gain in ‘accessibility’ you (potentially) lose in rigour and robustness. The media is, for all its benefits, an unreliable channel of dissemination and one where academics have very little control over the final product. Why on earth would a policy maker want to rely on a journalist’s interpretation of a piece of research that said journalist probably knocked up in an hour or so (just to make a deadline, mind)?

But quite why it’s academics’ responsibility to do all the legwork is kind of beyond me. Surely those making decisions, decisions which will affect people’s lives, have a responsibility to have all the facts at hand so they can make informed choices? Do we want to be half-arsing summaries of our research by just going for the ‘juicy details’ to satisfy some policy wonk who has the attention span of a hyperactive hamster? I’m not sure we do…

Moreover, who is to say that one’s research will even end up influencing policy? Only research which aligns with policy makers’ priorities is likely to be taken up (essentially, research which helps the policy maker spin their story). Can you imagine a piece of research commissioned by the Government which goes against their plans actually changing public policy?

The last point in the article that I do (kind of) agree with is the need to reassess how scholars’ performance is measured. As Andrew Pettigrew has argued several times, adopting a ‘portfolio’ model of research activity could be a way to go. So instead of focusing solely on published journal articles, things like scholars’ effect on policy implementation and design, contribution to public debates and so on should form part of evaluating scholarly contribution. But such evaluation and measurement makes me uncomfortable, since it sets fairly arbitrary standards and ultimately creates a culture of distrust within academia. And what if an academic doesn’t do this kind of sexy, flavour-of-the-month work? That should be ok. Universities should value knowledge of whatever stripe, not simply prioritise research which speaks to the most current funding call.

This all might seem a very down-in-the-dumps kind of post, but it needn’t be. We now have ever more tools at our disposal to wrest control back and communicate our research on our terms, and we need to remember that. Perhaps more importantly, we need to get our voices out there and contribute to ongoing discussions about knowledge¹. We may be ignored, but better to speak up and be ignored than stay silent and remove all possibility of being heard.

The Social Linguist

1. Although I freely admit this may be a conversational dead end – all the moaning in the world by academics about RAE/REF didn’t change anything…