In this blog, Geoffrey Boulton and Moumita Koley argue that establishing fair and transparent standards in science publishing is vital for maintaining the integrity and credibility of scientific research, which significantly impacts global society.

Divergence of vision and reality

The vision of the International Science Council is of “science as a global public good,” which implies that the results of scientific inquiry should be freely available to all who might wish to scrutinize or use them. Sufficient resource is currently available from public funders to make this a reality (EUA Big Deal Survey Report, 2018); but the reality is otherwise. There are two reasons for this. Firstly, although many scientific journals and papers maintain high standards, too many lack proper editorial oversight, many lack rigour and integrity, and some engage in fraudulent practices.

Reasons for excessive pricing

Two processes drive up prices. Firstly, most authors do not pay for publication (the cost is mostly borne by science funders), a “moral hazard” in economic terms which avoids the normal customer control of prices. Secondly, science publishing has evolved from a state, half a century ago, when getting into print was the major obstacle, to a current state in which almost any article can find a publisher. The major current challenge is to be read. So-called “high impact journals” offer such access, but at a high price. To rely on such a process, when sorting algorithms could readily generate source-agnostic lists of relevant papers and agreed minimum standards could exert quality control, reflects a dramatic lack of system governance from the scientific community and a silent acceptance of the actions of commercial publishers.


Impacts of assessment and ranking

There are two key drivers of individual and institutional behaviour that incentivize high price and lack of accountability for standards. Firstly, the value placed on bibliometric indices in evaluating performance and determining career advancement for researchers incentivizes a “publish or perish” culture that creates “overpublishing”. Secondly, the cumulative total and disciplinary distribution of bibliometric indices has come to be of importance to universities as institutions in creating university rankings. These utilize bibliometrics and other indices to generate ordinal lists of university excellence and have persuaded many governments to target funding with the express purpose of enhancing the ranking of selected universities.

A key part of such processes is to incentivize publication by academics to increase a university’s total bibliometric score. It has often been pointed out that these processes are statistically deeply flawed (Boulton, 2010; O’Neill, 2012). In order to compile a ranking, it is necessary to make so many arbitrary choices between equally plausible alternatives that the result becomes meaningless (Brink, 2023). Errors cannot be estimated, with the consequence that we do not know if rank 50 is different from rank 100. Apart from its methodological errors, ranking purports to capture something that there is no reason to believe exists: a one-dimensional ordering, in terms of quality, of all the universities in the world.

It is extraordinary that universities have been prepared to accept the judgement of commercial bodies about what constitutes a “good university” and that they have adapted to what these same organizations claim to be key indicators. This extraordinary choice has narrowed the perspectives of universities so that they converge towards a single commercially defined model, rather than exploiting the diversity that different cultural, social and economic settings need and deserve. It contributes to many perverse behaviours.

 
