

The Hungarian Art Network, from the BarabásiLab: Hidden Patterns exhibition. Art by Albert-László Barabási's laboratory, on display at the Ludwig Museum in Budapest. Credit: Dániel Végel/Ludwig Museum Archives

The Science of Science. Dashun Wang & Albert-László Barabási. Cambridge Univ. Press (2021).

Research institutions are under increasing pressure to make decisions faster, with fewer resources. The science of science can provide information on how to organize research effectively to meet societal needs.

The field uses quantitative tools to understand the discovery system. It complements venerable disciplines such as the history, philosophy and sociology of science, and relies on century-old bibliometric techniques that exploit the traces left by publications, grants and patents. Findings can illuminate trends, reveal disparities and inform policies for hiring, funding, training and more.

In their book The Science of Science, computational social scientist Dashun Wang and network scientist Albert-László Barabási present an introduction to a burgeoning part of this activity. They frame it as a big-data approach, but it is perhaps better understood as applying the tools of network science to study science. Their primer offers interesting anecdotes, engaging call-out boxes and an accessible style. But its narrow view leads to worrying interpretations.

They describe the science of science as emerging, without engaging with its historical or interdisciplinary foundations. In fact, the titular term was used in the 1963 book Little Science, Big Science, in which science historian Derek de Solla Price advocated that the community “turn the tools of science on science itself” — and has been used in major scientometric publications since the 1970s.

In the style of a management handbook, Wang and Barabási promise to help scientists to navigate their careers, arguing that the science of science aims to maximize individuals’ odds of success. They suggest that their insights will help administrators to spot the people who will bring the greatest benefit to a department, and they encourage funding agencies to identify those most likely to be high performing.

But the research community has moved from promoting indicators such as the journal impact factor and h-index to critiquing them. These measures often do more harm than good, creating what economists Margit Osterloh and Bruno Frey call a “taste for rankings”, rather than a “taste for science”. They lead scholars to salami-slice — publish data in increments to glean as many papers as possible — or worse, to compete.
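For concreteness, the h-index that such critiques target is easy to state: the largest number h such that a scholar has h papers cited at least h times each. A minimal sketch in Python, with illustrative citation counts rather than data from the book:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th best paper still clears the bar
        else:
            break
    return h

# A scholar whose papers are cited [10, 8, 5, 4, 3] times has h = 4:
# four papers have at least four citations each; the fifth has only three.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

Its very computability is part of the problem: a number this easy to calculate is equally easy to target.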

These concerns have been promoted through consensus statements such as the Leiden Manifesto and the Declaration on Research Assessment, which has been signed by thousands of institutions and more than 17,000 individuals. The documents call on the community to end reliance on poorly constructed indicators that can reify structural biases such as racism, sexism and classism. Policymakers are implored to remember Goodhart’s law: when a measure becomes a target, it ceases to be a good measure.

Not only do Wang and Barabási ignore this conversation — they seem to advocate the gamesmanship that the community has committed to dismantling.

Matthew and Matilda


They open with a discussion of scientific careers, listing dozens of people. They name only a handful of women in the entire book: half in a paragraph about the English department at Duke University in Durham, North Carolina, and then brief references to Marie Curie, Cleopatra and the sociologist Dorothy Swaine Thomas.

Sociologist Harriet Zuckerman is the sole woman acknowledged for her scientific work, in the section on collaboration. Her contributions to the concept of the Matthew effect — which describes the disproportionate rewards reaped by those in privileged positions — are neglected in favour of discussing her husband’s research on the subject. Also unmentioned is the Matilda effect. Coined by historian of science Margaret Rossiter, this term describes “the systematic undervaluing of women’s contributions to science”. It is named after suffragist Matilda Gage, who described the phenomenon in 1870.

The invisibility of women and people from other minoritized groups is not simply a matter of equity — it challenges the bases of the arguments in the book. Decades of empirical evidence from sociology and scientometrics show the strong influence of social and demographic factors on scientific performance. To ignore this is to enjoin administrators, funders and hiring committees to look to past success as a chief indicator of future success, without considering systemic barriers.

Things improve when Wang and Barabási tackle the optimization of research teams. They contend that large teams develop science, whereas small ones disrupt it. They emphasize the productivity of “super-tie collaborators” — scholars who continuously co-author papers across time, which they suggest is a mechanism for success. They provide evidence that the best teams draw from a variety of ethnicities, institutions and nations — reinforcing (largely without citing) work from sociology and scientometrics.
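The "develop versus disrupt" claim rests on giving each paper a disruption score, the CD index introduced by Russell Funk and Jason Owen-Smith and applied to team size by Wu, Wang and Evans. A minimal sketch, assuming the three citer counts have already been extracted from a citation graph:

```python
def disruption_index(n_focal_only: int, n_both: int, n_refs_only: int) -> float:
    """CD-style disruption index for a focal paper.
    n_focal_only: later papers citing the focal paper but none of its references
    n_both:       later papers citing both the focal paper and its references
    n_refs_only:  later papers citing the references but not the focal paper
    Returns a value in [-1, 1]: positive = disruptive, negative = developmental."""
    return (n_focal_only - n_both) / (n_focal_only + n_both + n_refs_only)

# A paper whose citers largely bypass its own references scores as disruptive:
print(round(disruption_index(40, 5, 10), 2))   # -> 0.64
# One cited mostly alongside its references scores as developmental:
print(round(disruption_index(5, 40, 10), 2))   # -> -0.64
```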

They imply that research on scientific collaboration began in 2000. Yet Zuckerman’s 1964 dissertation examined collaboration among Nobel laureates in the United States, and science historians Donald Beaver and Richard Rosen developed a comprehensive theory of collaboration in the first issue of Scientometrics in 1978. The empirical analyses that Wang and Barabási cite are drawn from between 2000 and 2005, before the rise of China as a scientific superpower, leading to anachronistic moments.

Lifetime impact

The authors introduce several concepts of their own, including “ultimate impact”. They argue that the lifetime citations of a paper are a function of perceptions of novelty and importance (fitness), how fast a work begins to be cited (immediacy) and for how long it is cited (longevity). They present a formula to predict the total number of citations a paper will acquire. They admit that this can lead to “the premature abortion of valuable ideas”.
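The formula they draw on is the citation-dynamics model from Wang, Song and Barabási's 2013 paper, under which a paper's cumulative citations grow as c(t) = m(exp(λΦ((ln t − μ)/σ)) − 1), where Φ is the standard normal cumulative distribution, λ is fitness, μ is immediacy and σ is longevity; as t grows without bound, this converges to the ultimate impact m(e^λ − 1). A minimal sketch, using the paper's global constant m ≈ 30 and illustrative, not fitted, parameters:

```python
import math

def cumulative_citations(t: float, lam: float, mu: float, sigma: float,
                         m: float = 30.0) -> float:
    """Wang-Song-Barabasi (2013) model: c(t) = m * (exp(lam * Phi(x)) - 1),
    with x = (ln t - mu) / sigma and Phi the standard normal CDF.
    lam = fitness, mu = immediacy, sigma = longevity; m is a global constant."""
    x = (math.log(t) - mu) / sigma
    phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return m * (math.exp(lam * phi) - 1.0)

def ultimate_impact(lam: float, m: float = 30.0) -> float:
    """As t -> infinity, Phi -> 1, so lifetime citations converge to m*(e^lam - 1)."""
    return m * (math.exp(lam) - 1.0)

# Illustrative parameters for a hypothetical paper (not fitted to data):
lam, mu, sigma = 2.0, 1.0, 1.2
for year in (1, 5, 10, 30):
    print(f"year {year:2d}: {cumulative_citations(year, lam, mu, sigma):6.1f}")
print(f"ultimate impact: {ultimate_impact(lam):.1f}")  # ~191.7
```

The worry is visible in the structure of the formula itself: everything a paper will ever achieve is extrapolated from its early trajectory.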

They then advocate the Q-factor, which seeks to define and predict scientific careers by quantifying an individual’s ability to turn an idea into a discovery with a given citation impact. This rests on the assumption that all scientists have access to the same resources, ignoring the massive disparities across countries and institutions. Wang and Barabási imply that highly productive scientists possess an inherent talent or ability, yet they assert that randomness is a key variable in “hot streaks” of output. Actionable and equitable science policy is unlikely to be built on ideas of either innate brilliance or unpredictability.
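In the underlying model (Sinatra et al., Science, 2016), the impact of scientist i's paper α factorizes as c = Q_i · p_α, where p is "luck" drawn from a distribution shared by everyone; taking logs, Q_i can be estimated from the scientist's mean log-citation count minus the population mean of log p. A minimal sketch, with a hypothetical career and an assumed value for that population parameter:

```python
import math

def estimate_q(citations: list[int], mu_p: float) -> float:
    """Estimate a scientist's Q under the model c = Q * p (Sinatra et al. 2016).
    In logs, log c = log Q + log p, so Q = exp(mean(log c) - mu_p), where mu_p
    is the population mean of log p, fitted on the full corpus in the original
    work. Assumes strictly positive citation counts (the model uses citations
    accrued within ten years of publication)."""
    mean_log_c = sum(math.log(c) for c in citations) / len(citations)
    return math.exp(mean_log_c - mu_p)

# Hypothetical career (citation counts per paper) and an assumed mu_p = 1.0:
papers = [12, 45, 3, 88, 20, 7]
print(round(estimate_q(papers, mu_p=1.0), 2))  # -> ~6.06
```

Note what the estimator cannot see: nothing in it distinguishes a well-resourced laboratory from an under-resourced one, which is precisely the objection above.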

Promisingly, they close with a research agenda for investigating failures, acknowledging that focusing on success overlooks this crucial aspect of research. Where do research functions such as synthesizing, replicating or curating sit in this binary classification, I wonder? Normal science, by definition, is the accumulation of findings from a broad labour force. The most productive and highly cited researchers stand on many shoulders. If the workforce is classified as either superstars or failures, cumulative scholarship is lost.

Science does not happen in a vacuum. It is a social and intellectual institution, rooted in historical, economic and political contexts. Underplaying these elements has grave consequences. Ultimately, Wang and Barabási deliver a dispatch from an era that assumed that science was a meritocracy, despite ample evidence to the contrary.


