
November 20, 2010

Danish Law Would Discourage Future Nobel Prize Winners From Seeking Work

After the American Century


The Danish government has proposed rules for admission to the country that would discriminate against the vast majority of the world's PhDs. Notably, the new rules would favor only two of those who received the Nobel Prize in 2010. The restrictive regulations that the right-wing government has proposed would give bonus points to anyone with a degree from one of the world's top twenty universities, as determined by the London Times annual ranking. Restricting the list to just the top 20 schools is a serious mistake. It should include at least the first 200 schools, especially since none of the Danish universities are anywhere near the top twenty. The rather nasty implication is that foreigners (or the Danes themselves!) with Danish PhDs are not really good enough.

In the London Times ranking, DTU is 122nd, Aarhus 167th, and Copenhagen 177th. As a group, the Danish universities have fallen considerably in the rankings in recent years.

The danger of excluding Nobel Prize winners is by no means a hypothetical exercise. A few years ago, one of this year's winners, Konstantin Novoselov, was offered a position at the University of Copenhagen, but his admission to the country became so snarled in red tape that he went instead to Holland, taking his Ph.D. at the University of Nijmegen. Just how many top-quality doctoral students and faculty are lost in this way? Some never apply in the first place, because Denmark has become known as a nation whose government creates problems for non-citizens.

The list below includes the universities that the 2010 Nobel Prize winners either attended or now teach at. I have put each school's position in the London Times world ranking in parentheses. Note that seven of the universities associated with this year's winners are not even in the top 200, much less the top 20.

Carnegie Mellon University (20)
Edinburgh University (40)
Essex University (not in the first 200)
Hokkaido University (not in the first 200)
Jilin University (China) (not in the first 200)
London School of Economics (86)
Madrid University (not in the first 200)
Manchester University (87)
MIT (3)
Nijmegen University (not in the first 200)
Northwestern University (25)
Peking Normal University (not in the first 200)
Purdue University (106)
Russian Academy of Sciences, Chernogolovka  (not in the first 200)
University of Delaware (159)
University of Tokyo (26)
University of Wales (not in the first 200)


The world's top 20 universities according to the London Times
1   Harvard University   USA
2   California Institute of Technology   USA
3   Massachusetts Institute of Technology   USA
4   Stanford University   USA
5   Princeton University   USA
6   University of Cambridge   United Kingdom
6   University of Oxford   United Kingdom
8   University of California, Berkeley   USA
9   Imperial College London   United Kingdom
10   Yale University   USA
11   University of California, Los Angeles   USA
12   University of Chicago   USA
13   Johns Hopkins University   USA
14   Cornell University   USA
15   Swiss Federal Institute of Technology Zurich   Switzerland
15   University of Michigan   USA
17   University of Toronto   Canada
18   Columbia University   USA
19   University of Pennsylvania   USA
20   Carnegie Mellon University   USA

See also World University Rankings, 2011-2012, elsewhere on this blog (October 2011).

March 21, 2009

The Bureaucratic Dream of Quantifying Research Results

After the American Century

I can see the attraction for bureaucrats and politicians of giving a numerical score to every book and article that every academic produces. If one could find a way to do this accurately, then individuals, departments, universities, and whole nations could be ranked, and money handed out to the most productive. It seems so logical and easy. Of course, university researchers will resist, but the effort surely would be worth it.

This fantasy has been pursued in different nations, and for the last year has been a key project of the Danish Ministry of Research. As it happens, I was dragooned (not asked) to serve as one of 300 experts charged with drawing up the lists of all scholarly journals and academic publishers, and then dividing them into groups based on quality. More points would be given to work published in the "best" journals. The Ministry considered this task to be so easy that it provided no release time or extra funding for it, and the work was to be done in just a few months. Each sub-committee would send in its lists and the Ministry would combine them into a complete overview.

This reminds me of a story I once heard about a Spanish king who, centuries ago, decided to produce a map of his empire by asking each region to prepare a map of itself, the idea being to combine them all into a map of the realm. Each governor had a map drawn, but of course the scales employed and the methods of representation were by no means the same. When the King tried to put the pieces together, instead of a map he had a misshapen patchwork quilt of no value.

Yet making a map of Spain is easy compared to making a map of academic knowledge production. Land, surveyed according to a single system, can be mapped fairly accurately, even if it is not as easy as it might appear, for one must take account of the curvature of the earth and of slight deviations in measurement due to equipment that reacts to changing temperatures, and so on.

But a numerical system to measure knowledge production? Here are some of the problems. First, some fields are intensive, others extensive. In philosophy, for example, the closely reasoned article is the central form of publication, and even a very fine philosopher may not produce many in a decade. In my field of history, articles appear more frequently, which makes a certain sense, since the subject matter is extremely extensive, with every nation, organization, and institution providing ample areas for study.

Second, in some fields books are the most important unit of publication, in others articles. Scientists mostly write articles, often of fewer than ten pages. For historians, the most significant unit of production is the book. The typical academic book runs 250-400 pages, more than ten times the length of the typical history article. How does one compare the two forms? Some university departments in the United States establish "conversion tables," ranging from five articles equals a book to as many as eight articles equals a book. There is no consensus.

The subcommittee of five on which I served developed a list of more than 700 English-language journals from Britain, Ireland, the United States, Canada, Australia, and New Zealand. The same committee was also responsible for the Spanish and French journals, on most of which I cannot offer a qualified opinion. Imagine that we spent only ten minutes considering the ranking of each of the 700 English-language journals. Ten minutes is far too little time, and yet even that would add up to 7,000 minutes, or more than 116 hours. The deeper problem is that even a committee of five will not know all 700 journals.
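For readers who like to check such figures, here is a minimal back-of-envelope sketch in Python of the workload arithmetic above. The ten-minutes-per-journal figure is only the illustrative assumption used in this post, not a number from the Ministry.

    # Back-of-envelope estimate of the committee's reviewing workload.
    # Assumptions (from the post): roughly 700 journals, 10 minutes per journal.
    journals = 700
    minutes_per_journal = 10

    total_minutes = journals * minutes_per_journal
    total_hours = total_minutes / 60

    print(f"{total_minutes} minutes, or about {total_hours:.0f} hours")
    # prints: 7000 minutes, or about 117 hours

Nearly three full working weeks, in other words, for one sub-committee's English-language list alone, before the Spanish and French journals or the publishers are even considered.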

Nor was this all. We also had to compile a list of academic publishers, a formidable task in itself, for many universities in the English-speaking world have presses, including some of the most prestigious. We were provided a list to start with, but it was rather useless, as it omitted many of the finest publishers and was not drawn up according to any principles that I could discern. We were told it was a Norwegian list, but I think the Norwegians are far more clever than that.

Well, we did our best, as did the other sub-committees, but our map of academic knowledge production could not possibly become coherent. To make matters worse, unidentified persons in the Ministry (none of them with even a Ph.D., so far as I can tell) tried to adjust the rankings without consulting the specialists involved. They made the mess worse and called their own intelligence into question. One example: a physics article published in Science is considered a great achievement at any university. Unfortunately, the Danish Ministry of Research did not know this and assigned Science a low ranking. That should have been a no-brainer. Readers in Denmark will know that this fiasco became part of an ongoing news story about the attempt to create what is called (in rough translation) a "bibliometric measurement system."

For the record, let me say that from day one I felt this was a misguided enterprise, whose real purpose was to take decision-making about quality out of the hands of professors and give it to bean-counters in the Ministry. Furthermore, such experiments in other nations, notably the UK, have shown that this approach does not foster world-class research. Rather, it encourages a calculated response to whatever point system is established. Suddenly several short articles are better than one long one, and several articles accepted by mediocre journals are "worth more" than one really great article that took years to write and place in a top journal. A book that can be researched quickly is worthwhile, but scholars are, in effect, punished for attempting anything that takes more than a few years. Textbooks are not worth any points, so no one wants to write them. Book reviews are also worth little or nothing, so this essential and very public part of the peer-review system is weakened.

Worst of all, academics may come to believe that every article published in a "top" journal is automatically better than one appearing in a "lesser" journal. In fact, innovative work often finds a home in new journals or new publication series, created by upstarts or dissenters. Judging and rewarding academic research with a point system reifies the present hierarchy and punishes innovators. The goal may be to stimulate research, but the result can be ossification.

It may seem astonishing to bureaucrats, but the best judges of what is great research are the specialists themselves - the peers in peer review. Why judge the content of an idea by the venue where it appears? Why suppose that quantity can make up for quality? Why imagine that knowledge is quantifiable in the first place?