Saturday, March 7, 2020

The most pernicious fallacy

UK Higher Ed loves metrics. REF. TEF. KEF. QS rankings. (I've written before about other types of rankings we might consider...)

Of course, there is no easy way to collate and evaluate the data necessary to make these different types of ranking system robust. The QS rankings are compiled by surveying academics about their perceptions of other departments. I received a request to participate in one of these surveys a few years ago, took a look at it, realised how entirely inadequate my knowledge was for giving informed responses, and promptly decided never to contribute to such surveys again. Plenty has been said about how graduate earnings and employability are no indication of the teaching quality of the course a person graduated from, and neither is student satisfaction, and yet these are all factors that the TEF takes into account. I don't know enough about KEF to say anything about the methods it uses, but I'm sure they're just as problematic.

But in this post, I want to talk about REF and the proxies it uses.

The point of REF is to grade the research outputs of individual departments as a means of determining how to allocate money to them, rewarding departments that are good and punishing ones that are bad. In an ideal world, the research produced would be read carefully and evaluated by panels of experts who have sufficient time and expertise to do this, and are commensurately recompensed for it. We all know that we're not in an ideal world and that this doesn't happen: There are too few experts on the panels, and they are given too little time to read and evaluate all the work they're given in a careful and calm manner. This isn't something unique to REF and REF panels -- it's the case any time research has to be evaluated, e.g., by promotion and progression committees, on grant evaluation panels, or on hiring panels.

As a result, proxies have to be developed. Even though people are not supposed to take publication venue into account when determining the research quality of a piece, the fact remains that venue matters. After all, Philosophical Quarterly is a highly prestigious journal, accepting only a very small percentage of the submissions it receives -- so if a paper has managed to jump that high bar, it must be a good paper, right?

But the fact is, the prestige of a journal is a supervening property, not something inherent in the journal itself. A journal receives its high prestige from the quality of the papers it publishes. That this is the case can be clearly demonstrated: If a currently high-prestige journal started publishing rubbish, the consequence would be that the prestige of the journal would decrease, rather than the quality of the papers increasing (to match the quality/prestige of the journal).

And yet, because the prestige of the publication venue is all too often taken as a proxy for the quality of what is published in that venue, people become susceptible to what I have called in the title of this post the most pernicious fallacy: The way to demonstrate the quality of your research is not by writing high-quality papers, but by publishing them in prestigious journals. Because if your paper appeared in a prestigious journal, it must be a good paper, right? We have gone from journals deriving their prestige from the quality of the papers in them, to papers deriving their quality from the prestige of the journal that published them.

Why is this problematic? Because it treats publication as an end in itself rather than a means to an end, that end being the dissemination of research. When I am doing research myself, in preparation for writing an article, where do I go to find relevant papers? Not to the high-prestige journals, in general; no, I go to the journals that specialise in the area that I am working in, because these are the papers that are going to be relevant to what I want to do. Similarly, when I publish, I want to publish in venues that tend to publish other papers on the same sort of topics -- because this increases the chances that the people I would like to have read my paper will actually do so.

As a result of this, I have tended to publish papers in specialist journals in my field -- journals which, quite rightly, have a high prestige in their respective fields because of the quality of the specialist papers they publish, places like the Journal of Philosophical Logic for logic and Vivarium for medieval philosophy. But because these are specialist journals rather than generalist ones, if these venues are taken as a proxy for the quality of the papers I've published in them, they get ranked lower than the generalist journals -- because if my paper were truly high-quality research, then of course a generalist journal would want to publish it. Now, that may be true: Maybe Philosophy and Mind and Philosophical Quarterly and the like are equally likely to publish philosophical logic or medieval philosophy as any other subarea of philosophy (I have my doubts about this, but I'm happy to suppose that it's true for the purposes of the example here). But is it equally likely that people interested in philosophical logic or medieval philosophy will go first to those journals to find the new, relevant research for their own projects? No.

And that's what I meant about publication becoming an end in itself rather than a means to an end. If the prestige of the publication venue determines the quality of the paper, for the purposes of rankings/evaluations such as REF, promotion, grant applications, job applications, etc., then getting a paper published in, e.g., Mind is an end in itself, even if the paper then dies a lonely, unread death because no one would ever think to look there to find a paper on that topic. If it is the journal that endows high quality upon the papers it publishes, rather than the papers published in a journal endowing high quality upon that journal, then whether anyone reads or uses the research becomes irrelevant: the actual research, and its actual quality, become irrelevant.

When we tell junior colleagues -- ones applying for jobs, applying for promotion, or trying to put together a good REF package -- that they should be submitting their work to the high-prestige journals, because those are the ones that will "count", we are falling prey to this pernicious fallacy. We should never forget that the quality of a research paper lies in the paper itself, not in the venue that publishes it.
