Saturday, February 28, 2009

Methods of administration affect the content of science

The administration of universities and other centers of scientific research is becoming more rational. In the good old days, brilliant thinkers convinced ministers to appoint them to often poorly funded chairs at universities. Some were geniuses and advanced science; most merely taught their field, and many produced nothing.

Rational administration makes decisions following previously fixed procedures and relies as much as possible on objective measures to achieve objective decisions that can be audited. The new rational administration promises to eliminate at least the worst cases of incapable and unproductive university employees.

Unfortunately, the rational administrative paradigm affects how science progresses. Candidates for academic positions and promotions – i.e. the whole academic research community – organize their work around these countable measures of achievement; I was astonished that, within a decade, whole fields and national university systems refocused from networks based on friendship and personal loyalty to counting publications.

The merit of a researcher depends on his contribution to the advancement of science. The difficulty lies in observing this contribution. Measurement has progressed in three steps: publications, reviewed publications and citations.

  • Firstly, counting publications is a poor measure, but it had the effect of moving university departments that had hardly ever published anything in the 1980s to producing many reports.
  • Secondly, filtering publications by outlet and counting only articles in peer-reviewed journals increased the number of journals with a review process (again, an amazingly quick response to the incentive structure).
  • Thirdly, the current state is to count not publications themselves but other researchers' perception of the contribution, in the form of citations (a sketch of all three counting regimes follows this list).
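To make the three regimes concrete, here is a minimal sketch in Python. The publication record, its field names and its numbers are invented for illustration; no real index works from data this simple.

    # Toy publication record; all data and field names are hypothetical.
    papers = [
        {"outlet": "working paper",    "peer_reviewed": False, "citations": 0},
        {"outlet": "conference",       "peer_reviewed": True,  "citations": 4},
        {"outlet": "journal",          "peer_reviewed": True,  "citations": 31},
        {"outlet": "technical report", "peer_reviewed": False, "citations": 2},
    ]

    # Step 1: count everything that was published.
    publication_count = len(papers)

    # Step 2: count only articles that passed a review process.
    reviewed_count = sum(1 for p in papers if p["peer_reviewed"])

    # Step 3: count how often other researchers cite the work.
    citation_count = sum(p["citations"] for p in papers)

    print(publication_count, reviewed_count, citation_count)  # -> 4 2 37

Note how each step rewards something different: the first rewards sheer output, the second rewards the outlet, the third rewards attention from peers.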

These measures can be refined by judging journals by their impact factor and calculating elaborate indices. A very useful tool is found at http://www.harzing.com/pop.htm; but the same author has analyzed the possibilities for manipulating such indices and warned against blindly believing them (Adler, N.; Harzing, A.W.K. (2009) When Knowledge Wins: Transcending the Sense and Nonsense of Academic Rankings, The Academy of Management Learning & Education, vol. 8, no. 1. Available online...). Wolfgang Wagner has pointed out in an email that blindly believing in academic indices is similar to blindly believing in rating agencies – the effects are slower in coming but perhaps similar.
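One of these elaborate indices is the h-index, which tools such as Publish or Perish report. A minimal sketch of its standard definition – the largest h such that h of the author's papers have at least h citations each – makes the arithmetic plain; this is an illustration, not the tool's implementation.

    # h-index: the largest h such that the author has h papers
    # with at least h citations each (standard definition).
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([31, 4, 2, 0]))     # 2: two papers with >= 2 citations
    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations

The sketch also shows the vulnerability: a few strategically placed citations at the boundary raise h by one, which is exactly the kind of manipulation Adler and Harzing warn against.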

Most of this is well known, and nevertheless it is amazing (1) how quickly the whole academic social system has been transformed and (2) how blindly administrations believe in such indices to manage the university – or rather, not to manage it, but to administrate it (see http://werner-kuhn.blogspot.com/).

The system revolves around writing, publishing, reading and citing publications. How?

I have learned how to write papers from examples and critical reviews; I use the same rules to review others' papers and to decide on publication in reviewed journals and conferences. Among researchers, reviewers, editors and conference chairs I see strong agreement on what constitutes a good paper. These 'good papers' constitute our advancement of science; it is not the 'advancement of science' that constitutes a publishable paper.

Evidence can be collected from typical instructions to reviewers:

  • Papers must link to previous work on the same subject; references are crucial (especially references to the reviewers' own work!). Lack of 'the' pertinent citation is often sufficient to disqualify a manuscript. As a result, papers are long on reviews of previous work (less so in mathematics, more so in geography) – a boring waste of time for the writer, the reviewer, the editor and, perhaps, the eventual reader.
  • Papers should make a novel contribution to science, but surprisingly, reviewers at many journals read this as 'unpublished' rather than asking what new idea helps others take the next step in research. A not-yet-published application of known method X to application Y appears novel. Good journals are stricter, so authors send in manuscripts that improve known method X by a small amount to achieve X + epsilon. Such manuscripts fare well with reviewers: the subject is known and easy for the reviewer to understand (the reviewer typically having published X, or another improvement to X, before), and the advancement, however small, is identifiable.
  • Papers must be short – page limits are often low and the attention span of readers is short. Established paradigms cannot be attacked where multiple mutually reinforcing beliefs must be questioned at once.

The optimal paper today picks up a current, well-delimited topic with sufficient previous work and improves on it minimally. This is the paper that goes quickly through journal review and earns, on average, enough points to make it into the programs of even strictly reviewed conferences. I am bored by reading this endless succession of papers that repeat what is already known and improve on it minimally (if at all).

Such papers count, are cited, and cite previous papers (though often not the original first publication); they make our index-based, rational university administration happy.

What papers are not published?

  1. Novel ideas – because there is not enough previous work and reviewers are not familiar with the topic.

  2. Critical papers – because some reviewers will not like the critique (of their work) and react negatively; editors typically go with the negative vote.

  3. Papers introducing new points of view – because reviewers will claim that this has been known before, and editors will not force them to substantiate the claim.

What papers would I like to read?

Papers with controversial ideas that can be discussed – when did you last see a discussion in a scientific journal? Substantive papers whose reviews vary widely between very good and very poor could advance science more than just another tame epsilon improvement.