
«Science was never intended to be in the market, but today it’s a commodity» - interview with Andrea Saltelli

Posted by Social Observatory of "la Caixa" on 28 Jun 2017


Interview with Andrea Saltelli - originally published at Social Observatory "la Caixa"

Andrea Saltelli (Italy, 1953) is adjunct professor at the Centre for the Study of the Sciences and the Humanities at the University of Bergen (Norway) and guest researcher at the Institute of Environmental Science and Technology at the Universitat Autònoma de Barcelona. Together with philosopher Silvio Funtowicz he has recently written a series of pieces on the post-truth debate.   

---------------------------

Everybody is talking about a crisis in science... What’s it about?

First of all, there is a crisis of replicability, which is especially evident in the medical field; replicability means that a study should produce the same results if repeated exactly. Many articles have been written by people who attempted to replicate experiments and were disappointed to find how many of them failed. For instance, John Ioannidis and others have tried to replicate preclinical and clinical experiments.

 

What are the causes of this crisis?

This discussion can be thrown open very wide because there is a chain of causes. The main one is that science was never conceived or designed to be in the market. But today science is a commodity: it is in the market, and it is sold at a price. The historian Philip Mirowski has detailed this process in a book called Science-Mart: Privatizing American Science. The title is a play on words: when science becomes a supermarket, when it becomes too much of a commodity sold over the counter, its quality disappears.

 

Is this happening in all disciplines?

It affects all fields; it is also notable in psychology. Nobel laureate Daniel Kahneman, who wrote the book Thinking, Fast and Slow, was among the first to realize that something was going really wrong because experiments could not be replicated. Auguste Comte, the 19th-century philosopher, thought that the sciences follow a hierarchy according to how close they are to exact laws. At the top you have mathematics and geometry, and then you have physics, chemistry, biology and the social sciences. The further you move from the top, from exact laws, the closer you get to domains where things become messier and more complex. Nearly two centuries after Comte, Daniele Fanelli looked at how results are reported across disciplines. He found that the further down the hierarchy of sciences you travel, the higher the proportion of positive results, which confirmed his hypothesis that 'softer' disciplines are more prone to bias.

 

In this sense, where are the limits of science?

Science cannot solve every problem. Reductionism is the idea that you can take a complex system, cut it down into bits, and if you study all the bits, then you understand the complex system. But there are systems which cannot be treated in this way, for example living systems. Whenever you want to study a biological system, you have to somehow delimit it. But how do you delimit it? In organisms, everything is linked to everything else.

I know this is very controversial, but with climate this happens over and over. Climate is too complex to be predicted with any confidence by mathematical models. When a system has so many possible co-causes, effects may cancel out or be hidden by natural variability. Back in the early 1970s, the physicist Alvin Weinberg called this trans-science, to indicate those processes that can be studied scientifically but for which no solution can be found to the problems they create. We are unable to tackle some problems because of their dimensions. Science needs to learn humility and be prepared to admit when a solution cannot be given.

 

Would consortia of different centres and countries be a possible solution to tackle such a big problem?

Well, the Human Genome Project, for example, was successful, but the idea that from mapping the human genome we can infer relationships between genes and diseases has proved much harder to realize. And this is exactly one of those cases in which the system has behaved as a complex system with emergent properties: you don’t detect those properties by cutting it into pieces and identifying a limited number of genes. For this reason, I believe there has been considerable disappointment among the start-up companies trying to make a business out of gene mapping. I am not saying it shouldn’t be tried, but we should beware of falling into this trap of reductionism.

 

Is post-truth reaching science too?

This post-truth story is very disingenuous. Now we have post-truth: why? Because we had truth before? I strongly doubt that. Science was born in the 17th century as a combination of discovering nature purely for the pleasure of discovery and of dominating that very same nature. Both aspects contributed to science becoming the foundation of the modern state. But when modern markets developed, science increasingly became an instrument of domination, profit and growth, as well as the source of all kinds of wonderful things that we enjoy. This is no longer science for the sake of learning.

Yet these days, what is science for? Is it for the common good or for the profit of the few? So there is a collapse in the legitimacy arrangement between science and democracy on the one hand, and in science’s own governance on the other. This is the result of science being more and more in the market, even becoming a market itself. So, for me, there are two processes behind post-truth: the loss of legitimacy of science and knowledge as pillars of the modern state, and the collapse of the quality of science itself.

 

It seems that trust and confidence in experts are eroding... Why is this happening?

A classic example of this is the sugar story. It is explosive, and I am surprised at how it passed by virtually unnoticed. People are progressively losing faith in science, yet I was expecting a much stronger reaction because the story is huge. Last year a journal of the American Medical Association published a report revealing that the sugar industry had funded research designed to focus attention on fat, in order to take the spotlight away from sugar. Can you imagine the consequences this may have had in terms of health effects? What if we calculated how many years of life have been lost, to diabetes for example, because of this gigantic act of corruption of research integrity?

 

What is the role of scientists, as individuals, in this crisis?

It’s relatively easy to get it wrong and think that you have discovered something. The physicist Richard Feynman said that “you are the easiest person to fool”, because when you are looking for something, everything looks like that something. This is called confirmation bias: every scientist has a greater tendency to believe the results that he or she expected to be true in the first place.

 

Can biases like this be fixed?

You have to be tenacious, but also obsessed with the quality and accuracy of what you do. The thinker Jerome Ravetz understood scientists and their communities of practice very well. Everything that you do in the lab has many elements that you can’t find in any handbook; they have to be communicated from person to person. This is the unspoken element of a craft. Everything is personal in science, in these communities. But nowadays these communities have mostly disappeared. Science has become impersonal.

I can publish a paper and not care whether I am wrong, because after all, people know me through my impact factor. The higher my impact factor, the more brilliant I appear, so I have an interest in publishing many papers even if they are wrong. And so there are errors that can remain in the system for years without anybody ever finding them.

 

Are people working on the edges of science more likely to be misled?

People who are really at the cutting edge of science work in small communities and are often less likely to make mistakes. Science can be spectacularly successful there. I am thinking, for example, of high-energy physics, or of the discovery of gravitational waves. It’s a triumph; it’s really something huge, achieved thanks to the tenacious effort of physicists.

 

In your opinion, what are the potential solutions to this crisis?

There are many good people doing very good work and trying to change the system from within. Munafò, Ioannidis and other authors recently published ‘A manifesto for reproducible science’. We should really stop using things such as impact factors and citation counts, numbers which purport to describe the importance of journals and researchers. The peer review system has also become very dysfunctional. There are recommendations for changing the situation, and I am all in favour of these approaches. But we need something very powerful, because I don’t think the system can heal itself.

 

Is there any collective effort attempting to solve these problems?

There has been an important declaration against the use of the impact factor to award grants: the San Francisco Declaration on Research Assessment (DORA) of 2012. It was a very important document and a well-designed set of recommendations on metrics, but hardly anybody applies even part of them. If I want to win a European Research Council grant, they will look at my impact factor. And this brings us to the paradox that people of my age, who should be saying ‘I don’t care about my citation numbers’, are instead very careful about them.

Even when your ideological conviction is that these things are bad, you still use them, as do the research institutions that award grants. Why do they look at impact factors and citation counts? Because the only alternative is to read the candidates’ papers, and that takes time. Students in many countries are required to have three papers accepted in peer-reviewed journals to secure their PhD, and even here the quality check on the candidates is outsourced to the journals instead of being performed by the faculty. With all these driving forces, the result is that some two million papers are published every year: a huge paper-generating machine.

 

Are you pessimistic in this respect?

I am more pessimistic than optimistic, yes. In spite of all the declarations and manifestos just mentioned, it will be very difficult to fight the perverse incentives that push in the direction of cheating. If you love science you have to defend it, and that implies being critical. But many people prefer to hide the problem because, they say, if you attack science you will jeopardize its funding. Well, I wouldn’t be against jeopardizing funding for science that is not properly conducted. Why should we pay for bad science?

 

Taking into account the context you depict, can we identify initiatives that have led to an improvement at system level? Or any good practices that are already working to improve the current situation?

We can note, ironically and not as a suggestion for the West, that in China prison terms or even capital punishment can be used against those who submit fake data in drug trials.

Ironies aside, initiatives such as Brian Nosek’s Reproducibility Project in psychology, John Ioannidis’ Meta-Research Innovation Center at Stanford, and Ben Goldacre’s alltrials.net are all worthy undertakings that are not only setting about changing things but also fostering a new climate of reform. Add to this Retraction Watch, which does an invaluable job of keeping up the pressure on journals and their editors. I am also convinced that the direct involvement of scientists on the side of citizens in societal and environmental problems is a precious contribution that helps generate trust between science and society. The case of water pollution in Flint, Michigan, and the role of the scientist Marc Edwards, comes to mind here. Back in the seventies, a group of scientist-activists called the British Society for Social Responsibility in Science sought to change the world by first changing science itself: something of this sort is perhaps needed now.

 

Interview by Núria Jar for Social Observatory "la Caixa"

