
Relative Citation Metrics

This is the first post in what I hope will be an ongoing series on scientometrics.

Today’s topic is an excellent paper (and idea) by Hutchins, Yuan, Anderson, and Santangelo called the Relative Citation Ratio (RCR).

The RCR is part of a new wave of citation metrics that aims (broadly speaking) to correct for the well-known shortcomings of the Journal Impact Factor (JIF). The idea behind the RCR, and proprietary equivalents like Elsevier’s Field-Weighted Citation Impact (FWCI), standardized via Snowball Metrics, is to adjust citation counts for the size of the field. In other words, publishing in JAMIA (impact factor of around 4.3 at the time of this writing) comes with a much smaller built-in audience than publishing in JAMA (impact factor of 51 and change). How do we account for audience size when measuring how impactful or important a paper is?

The answer that the RCR and FWCI propose is to measure the size of the field, and then tell us how much (or how little) a paper is cited compared to the field’s average. In other words, if my paper is cited twice as much as other papers in the same field, my paper’s score is 2.0. With this in hand, I can compare the relative impact of papers across fields.
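To make that concrete, here is a minimal, purely illustrative sketch of the core idea: divide a paper’s citation rate by the average citation rate of the papers in its field. This is not the NIH’s actual formula (which also benchmarks the field citation rate against a cohort of NIH-funded papers); the function and variable names below are mine, for illustration only.

```python
# Toy sketch of a field-normalized citation ratio (not the official RCR formula).
def relative_citation_score(paper_citations_per_year, field_citations_per_year):
    """How many times more (or less) a paper is cited than its field's average."""
    field_average = sum(field_citations_per_year) / len(field_citations_per_year)
    return paper_citations_per_year / field_average

# A paper cited 8 times per year, in a field averaging 4 citations per year,
# scores 2.0: "cited twice as much as comparable papers."
print(relative_citation_score(8.0, [3.0, 4.0, 5.0]))  # -> 2.0
```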

Even better, the “field” is computed dynamically, at least in the RCR’s case, by looking at papers co-cited with the one we’re focusing on. So you can assume that someone, somewhere, with expertise decided that these ideas were worth meshing together.
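For the curious, here is a rough sketch of what that dynamic field definition could look like: any paper that shows up in the same reference list as the focal paper gets pulled into its co-citation field. The toy citation network and helper name below are my assumptions, not iCite’s actual implementation.

```python
# Hypothetical co-citation "field" for a focal paper, built from a toy network.
# references: citing paper -> the set of papers it cites.
references = {
    "citing_A": {"focal", "p1", "p2"},
    "citing_B": {"focal", "p2", "p3"},
    "citing_C": {"p4", "p5"},          # does not cite the focal paper
}

def co_citation_field(focal, references):
    """Papers cited alongside `focal` anywhere in the network."""
    field = set()
    for cited in references.values():
        if focal in cited:
            field |= cited - {focal}
    return field

print(co_citation_field("focal", references))  # -> {'p1', 'p2', 'p3'}
```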

Overall, I think this is great, and much better than looking at “raw” citations. However, I also see weaknesses in it. They stem both from the methodology and from the general idea.

The latter is easier: if I publish in the New England Journal of Medicine and get 100 citations, it still means I published in the freaking NEJM and got 100 people to cite my work! Sure, it may be so-so compared to other NEJM articles, but it still influenced other people who then went and used my idea. I struggle to accept that this is “the same” as a putative paper with 2 citations in the Journal of Obscure Armadillo Studies. In other words, with relative citation metrics we’ve found a way to compare apples and oranges “fairly”, but we may be comparing them by something akin to how much they weigh; that’s meaningful under some circumstances, but it doesn’t really speak to inherent value. If I have 300 grams of oranges, and you have 300 grams of apples, which fruit is better?

The methodological weaknesses stem (IMO) from the lack of transparency. You can’t really compute the RCR and similar measures without a large dataset, a citation network, and a computer. As such, it’s opaque and works as an oracle: whatever it says goes, and it’s hard to check the work. You just have to trust it. This is especially painful because there’s very little publicly available citation data, and most of it sits behind expensive paywalls at Elsevier, Clarivate, and others. Further, just like any other citation-based metric, it will change as new citations appear… so yesterday’s paper can’t be judged until a couple of years have passed, and today’s RCR will be different tomorrow. We should timestamp these things.

With all that said, it’s a better impact metric than raw citations or the JIF, and for looking at scientific productivity broadly (the 30,000-foot perspective) it’s definitely better. The importance of removing the effect of field size can’t be overstated. In essence, it has “context” built in, and for committees or executives looking at the quality of scientific output across a program, department, or institution, it should prove extremely useful.

For judging a single paper or author… there’s no replacement for careful, qualitative peer review.

RCRs, and a free, open (albeit somewhat small) citation database, are available at the NLM’s new, excellent iCite site/tool.