May 4, 2024

The mountain of shit theory

Uriel Fanelli's blog in English


Dark Academy

Writing an article like this without being accused of being anti-vax and anti-everything-else is difficult, because anyone who points out a problem with science, or with the way it is "produced" today, becomes an outcast. So I will build this article on two facts.

The Technical University of Zurich, one of the most important in the world (Einstein, among others, studied there), has decided to stop making the data on its scientific output available to the companies that compile the annual rankings of top universities. That is, it has forbidden its bibliographic production data from being used to calculate how authoritative the university is. It's a courageous move, coming from a university that needs no further authority.

The second fact is that the European Research Council (ERC), the European agency for basic research, has urged its evaluators not to rely on the traditional publication metrics, in particular the Impact Factor (which quantifies the scientific impact of a journal) and the h-index (which quantifies the scientific impact of an author).
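For readers who have never met the second metric: an author's h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch in Python (the citation counts are invented, purely for illustration) shows how mechanical the measure is:

    def h_index(citations: list[int]) -> int:
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)  # highest-cited first
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the paper in this position still has >= rank citations
            else:
                break
        return h

    # Invented citation counts, purely for illustration.
    print(h_index([50, 18, 7, 5, 4, 1]))  # prints 4: four papers with >= 4 citations each

Note that nothing in the computation asks whether any of those citations were deserved, friendly, or bought.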

To understand what the problem is, let me start with a premise. There are two types of "research": public and private. The private kind, the simplest to analyze, is the one that made the Covid-19 vaccines, the one that gave us ever-faster processors, and almost all of the improvements in the technologies we use.

How does private research determine who is good and who is not? Well, it uses a mainly economic criterion, and mainly economic tools: the number of patents, their yield and market value, and obviously the sales linked to new "discoveries".

And from this it decides the funding to allocate to people and groups. Private research shares some methods with public research, such as the experimental method, but not ALL of them: peer review, for instance. If someone built a photonic microprocessor, they certainly wouldn't wait for competitors to validate the discovery: if the object works, it is patented and put on the market. From then on, its validation is the fact that it works in everyone's pocket. And the proceeds finance even more research.

If I had to give an example, I would give the blue LED, which was the result of research conducted privately by a Japanese company. They built it, they saw that the light was blue, and then they put the blue LED on the market and made billions of dollars from it. Peer review? And from whom should they have sought it: Matsushita-Kotobuki? Sony? Those were the competitors.


If, however, we move the problem of financing and validation to the world of public research, normally understood as "academia", we do not find such clear criteria. And we do not find them because the method that was chosen is one that we, in the twenty-first century (the century of social media), recognize as very weak and easy to game.

Once upon a time, that is, when social networks did not yet exist, it was easy to think of peer review and citation counting as a granite-solid method. But today, accustomed to social networks, we easily grasp the analogy between a positive peer review and a "like", and a negative peer review and a "dislike" (as on Reddit, or on YouTube, which has since hidden the dislike count). Even the idea of calculating how influential the journal is and how influential the author is can be mapped onto the reputation of the author, or the reputation of the channel. Think of TikTok, for example.

But academia came first. And back then, financing universities and groups on the basis of reputation and circulation, a criterion typical of the publishing world, had not yet been exposed as a method that fails miserably on every social network.

When you go to Amazon to read product reviews, which are the analogue of peer review, nowadays you immediately wonder what the reviewer's agenda is and, to make matters worse, whether the review is genuine or fake.

Similarly, you can try to estimate the rating of a seller, which is like trying to estimate the channel, or the journal, that publishes a certain article; but we know very well how many techniques exist to falsify this too.

In a historical sense, the fact that the classic methods for validating and financing academic groups are being questioned coincides with a new awareness: we know these methods because they are also used on social networks and in online marketplaces, and we know very well that they are easy to game.

Of course, the old, fossilized professor who lights his cell phone with a Bunsen burner (and then complains that it doesn't work well) will happily stick to the traditional route. But as young professors gradually make their careers and reach the top levels, the awareness spreads that those methods are really weak.

If someone proposed a rating system for a hotel room based on the fact that ONE stranger had stayed there and liked it (i.e., a single positive peer review), we would turn up our noses. We want AT LEAST dozens, if not hundreds, of reviews. And if a scientist replies that a colleague's peer review contains data, I can point out that the whole web is full of positive and negative reviews full of data (videos and photographs). They don't convince us much either.

Of course, reviews on the internet are not useless: if I see a product with hundreds or thousands of pieces of feedback, almost all very positive, while the few negative ones come down to ordinary bad luck or personal taste, then I know the product is most likely credible.
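To make the intuition concrete: what we do instinctively in front of a review page is statistics on a sample. One standard way to formalize it (my choice for illustration, not something the post proposes) is the Wilson score lower bound, which estimates the worst plausible "true" rate of positive reviews given how many we have actually seen. A single glowing review, like a single positive peer review, yields almost no confidence:

    import math

    def wilson_lower_bound(positive: int, total: int, z: float = 1.96) -> float:
        """Lower bound of the 95% Wilson confidence interval for the
        true fraction of positive reviews, given the observed counts."""
        if total == 0:
            return 0.0
        p = positive / total
        centre = p + z * z / (2 * total)
        margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
        return (centre - margin) / (1 + z * z / total)

    # One glowing review versus hundreds of mostly positive ones.
    print(round(wilson_lower_bound(1, 1), 2))      # ~0.21: tells us almost nothing
    print(round(wilson_lower_bound(480, 500), 2))  # ~0.94: now the product looks credible

With one review out of one, the plausible floor is around 21%; with 480 out of 500 it rises to about 94%. Hence the instinct: one enthusiastic stranger is not evidence, hundreds are.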

But I also know that, by hiring specialized agencies, you can buy both reviews and ratings, and therefore I don't have, so to speak, "scientific" certainty.


In fact, when you explain to a young person how a paper is validated, they generally roll their eyes. In practice, they reply, there is a kind of social network of scientists who like and dislike each other; there are scientists with big reputations, just as there are influencers; and there are important journals, just as there are important channels, or important Discord servers.

We must recognize that the analogy is there.

And therefore I can imagine the analogue of all the reputation-management and advertising methods I find on social media. People who get to keep the product for free if they write a positive review, just like people who attend a conference and cite the organizer, and are then cited in return by the organizer, or by others at the same conference.

To understand how the problem materializes in practice (sexism aside, or maybe not), I recommend watching this video.


Everything revolves, obviously, around the continuous production of "papers", which, as you heard Sabine say, are money: they are used in the rankings of researchers and groups, in drawing up the "best universities" rankings, and in allocating funds to research groups.

I don't know how this story will end: as awareness spreads of how weak and unreliable the criterion followed so far is, sooner or later people will begin to choose for themselves the manufacturers from whom they buy science, in the form of technology.

There is no doubt, in other words, that Apple knows what it is doing when it builds a cell phone. But would we buy one built by Atlanta University?

Uriel Fanelli


The blog is visible from the Fediverse by following:

@uriel@keinpfusch.net

