
Selecting the best journals for our papers

Submitted by redoxoma on Fri, 02/28/2020 - 19:58
Photo by Jacqueline Macou (https://pixabay.com/users/jackmac34-483877/) under Pixabay License

Radical-Free Corner by Alicia Kowaltowski & coauthors [1], from Instituto de Química da USP
Corresponding author e-mail: alicia@iq.usp.br

Scientific publications in specialized journals have always been the cornerstone of communication among scientists, allowing for the exchange of new knowledge. However, the number of specialized scientific journals in the Biochemistry and Molecular Biology area has grown quickly in recent years and is 50% larger today than it was 15 years ago. In this changing landscape, choosing an adequate journal to submit to requires new considerations. With this in mind, the Department of Biochemistry, University of São Paulo, published a guideline on what it considers the main points to weigh when choosing a journal [1]. The consensus of the 48 authors of this position paper is summarized in the following bullet points:

“-Submit pre-prints whenever possible, avoiding their use only when a target journal does not permit it or it is unadvisable (such as for clinical studies), and then proceed with submission to a quality peer-reviewed journal.

-Value and provide high quality peer-review in scientific journals, both by delivering careful revisions, when requested, and seeking good revisions of submissions.

-Give preference to journals with a solid and time-tested reputation for quality, irrespective of current impact factor.

-Denounce and avoid unfair pricing for both subscription and open access journals.

-Value journals with good visibility, indexed widely, and that appeal to a general audience.

-Give preference to journals with strong links to academic societies and that include active and highly reputable investigators on their boards and advisory committees.”

One much discussed point in 2019 was the idea that scientific publications should be freely accessible to all readers. This concept was pushed strongly by Plan S, an ambitious proposal launched in September 2018 to make open access publication mandatory worldwide by January 2020 (since pushed back to January 2021 [2]). While the Department of Biochemistry authors believe open access publications should prevail in the future, they caution that an uncontrolled and hasty push for immediate open access can exacerbate two major problems in the scientific publishing landscape: predatory publications [4] and abusive pricing [1].

An alternative that provides immediate open access and meets the requirements established in 2019 by our main research funding agency, FAPESP [3], is to deposit pre-prints of papers prior to the final peer-reviewed publication. Pre-printing is generally recommended by the position paper [1], which also notes that a final peer-reviewed publication remains important for the visibility and quality boost that the revision process provides.

Another point almost universally considered when choosing journals is the impact factor. The authors of the position paper caution that impact factors have many caveats, and that the general aim should be to reach a broad audience of scientist readers, not to chase a highly flawed number. Instead, the quality of the editorial board, the institutions backing the journal (such as reputable scientific societies) and its time-tested reputation should weigh more heavily when choosing journals.

Indeed, scientists should make a collective New Year's publication resolution [5] for 2020: to consult the history of a journal and the names on its editorial board, selecting those with the highest-quality scientists in the field, regardless of impact factor. This would ensure the best possible peer-review process for the paper and thereby contribute to the generation of high-quality science.


References

  1. M.S. Baptista, M.J.M. Alves, G.M. Arantes, H.A. Armelin, O. Augusto, R.L. Baldini, D.S. Basseres, E.J.H. Bechara, A. Bruni-Cardoso, H. Chaimovich, P. Colepicolo Neto, W. Colli, I.M. Cuccovia, A.M. Da-Silva, P. Di Mascio, S.C. Farah, C. Ferreira, F.L. Forti, R.J. Giordano, S.L. Gomes, F.J. Gueiros Filho, N.C. Hoch, C.T. Hotta, L. Labriola, C. Lameu, M.T. Machini, B. Malnic, S.R. Marana, M.H.G. Medeiros, F.C. Meotti, S. Miyamoto, C.C. Oliveira, N.C. Souza-Pinto, E.M. Reis, G.E. Ronsein, R.K. Salinas, D. Schechtman, S. Schreier, J.C. Setubal, M.C. Sogayar, G.M. Souza, W.R. Terra, D.R. Truzzi, H. Ulrich, S. Verjovski-Almeida, F.V. Winck, B. Zingales, A.J. Kowaltowski. Where do we aspire to publish? A position paper on scientific communication in biochemistry and molecular biology. Brazilian Journal of Medical and Biological Research, 52(9), 2019. | doi: 10.1590/1414-431x20198935
  2. European Science Foundation. 'Plan S' and 'cOAlition S' – Accelerating the transition to full and immediate Open Access to scientific publications [homepage], 2020. URL: https://www.coalition-s.org
  3. F. Marques. FAPESP lança política para acesso aberto [FAPESP launches open access policy]. Pesquisa FAPESP [online], 2019. URL: https://revistapesquisa.fapesp.br/2019/03/14/fapesp-lanca-politica-para-acesso-aberto/
  4. A. Hern, P. Duncan. Predatory publishers: the journals that churn out fake science. The Guardian [online], 2018. URL: https://www.theguardian.com/technology/2018/aug/10/predatory-publishers-the-journals-who-churn-out-fake-science
  5. L. M. Gierasch. JBC’s New Year’s resolutions: Check them off! Journal of Biological Chemistry, 292(52): 21705–6, 2017. | doi: 10.1074/jbc.e117.001461


The Electronic Lab Notebook: Am I going to have one?

Submitted by redoxoma on Thu, 02/28/2019 - 18:06

Main Article by Percíllia Oliveira and Patricia Nolasco

Communication is at the heart of science throughout all stages of a project. Ideas, evidence and experimental findings have to be shared and discussed among students and post-docs, with their supervisors, among collaborating groups, with funding agencies and in some cases with industry, long before they are communicated to the public. Paper laboratory notebooks have been at center stage of this scientific communication. Science starts and gains life in those pages. What is written there is the fundamental documentation that provides a safe basis for assessing, at any time, the collection of observations, accompanied by crucial experimental details, which underlies the rite of passage of a hypothesis to results: negative or positive, failed or successful, correctly or incorrectly interpreted. But the essential element in the lab notebook is the data, immutable and solid as they should be. From there, the results are assembled into scientific papers, go into meeting presentations, build scientific contributions and in some cases serve as the basis for patents and applications. It is at once naïve and ironic that vast implications, ranging from novel worldwide research avenues to drug developments that cost millions and directly affect the lives of many patients and the public in general, rest solely on the good faith that experiments in fact occurred as they are written in a lab notebook.

Thus, the lab notebook remains the crucial repository of raw scientific advances. However, handwriting is not always easy to interpret, paper is slow to write on, its information is tedious to retrieve, it is environmentally unfriendly and very hard to store at scale for prolonged periods of time. It is also always possible to forget to document crucial information, and the physical medium has an intrinsic fragility. Moreover, as scientists deal with increasing volumes of data, as in Systems Biology or Big Data, paper notebooks have become inefficient and can be viewed as archaic. In addition, increasing concerns over the reproducibility of scientific experiments, as well as data manipulation, have prompted a significant upscaling of the documentation standards required by funding agencies and publications.

Furthermore, from a practical standpoint, let's put ourselves in the shoes of a young student or post-doc. The first thing that happens when you join a lab is that you receive a notebook you should guard as carefully as your own life, because all your work during graduate school and the ensuing years will be on those sheets. In the beginning, keeping the lab notebook is relatively effortless and even seems simple. Until time starts to pass faster and faster and, maybe in one week or in three years, you need to replicate some experiment. Can you easily find the previous data? Are you sure this will be possible using just your lab notebook notes? Can you decipher all the notes and drafts? And what if something happens to your lab notebook? And why did you forget to write down that small detail that has now become critical? In many cases you start to wish you were the kind of person who takes organized notes of everything at the very moment it happens. And then you may remember that everybody told you that, today more than ever, we must be focused enough to be organized and to document everything we do in the lab using precise rules. And you might feel guilty and, more than ever, part of a huge party named… human beings.

Thus, alternatives to the paper lab notebook are a logical and needed step to improve the accuracy, accessibility and reproducibility of raw scientific results. In a logical sequence of events, electronic lab notebooks (ELNs) have emerged to fill this gap and are growing steadily. Historically, the idea of a paperless lab has been a promising development since the early 1990s. It is expected that ELNs will change the way scientific information is kept, facilitating reproducibility, long-term storage and the availability of experimental records across multiple devices (e.g. phones, computers, tablets), while also providing interfaces to instrumentation through integration with digital data and images. In particular, ELNs may facilitate investigator adherence to best practices in data documentation. Other relevant aspects of ELNs include easy data sharing and backups, search algorithms, enhanced transparency and a way to protect intellectual property by ensuring that records are properly dated and maintained. Indeed, ELNs can be very useful for knowledge management, by housing all raw research-related files (notes, data, results, graphs, images, etc.) in a single place; this provides an easy tool to browse and search through these experiments as a whole, even years later.

Currently there is a wide range of ELNs on the market (ca. 72 versions, free and paid), covering different areas within Chemistry and Biology and most active knowledge domains. There are also generic note-taking products that have been evaluated for use as ELNs, such as Evernote and OneNote. In addition, several investigators use cloud storage such as Dropbox and Google Drive, which are reasonable options depending on the type of laboratory and the data to be stored or exchanged. The best-rated ELNs available, according to a systematic search by a Life Sciences news website (Splice [7]), include SciNote, Benchling, RSpace, Docollab, LabFolder, LabArchives, Mbook, Labguru and Hivebench. Many universities and research institutes have also started to provide ELNs to their investigators, for example the VIB Institute in Belgium [8]. Each lab has its own set of expectations, intentions, needs and capabilities, which will most likely never be fulfilled by a single universal ELN but may be satisfactorily addressed by one of these options.

Despite the large ELN portfolio and its advantages, many scientific labs still prefer paper over digital technology: the majority are likely still handwriting in paper notebooks and pasting tables and gel figures into them. In other cases, research labs are struggling, with mixed success, to introduce ELNs or digital data management into their teams. These difficulties can be attributed to a number of reasons, including: resistance or fear of giving up the security blanket of the paper lab notebook; inertia against changing established practices (i.e. documentation, data storage), especially in the middle of an ongoing research project; lack of information about how ELNs work; the extra time needed to learn to use ELNs; lack of encouragement and support from supervisors for the use of ELNs by their students; and, finally, the feeling that all is well, so why change? (even if all is not well, as we will see below). These points highlight the main obstacles to widespread adoption of ELNs. However, each lab has its own issues and challenges in introducing ELNs into its routine, so there is no single best approach to getting started. In general, though, it would be wise to start addressing this issue and to encourage students and post-docs to get involved in ELN practices as soon as possible. Today's early-career researchers, undergraduates and PhD students, who have grown up with digital technology, tend to embrace electronic solutions. However, adoption of ELNs demands more than good technology; it requires a change of attitude and organization. For beginning students, it would be ideal to adopt ELNs from the start of their projects. Nevertheless, a parallel structure to support those who have to transition from paper appears essential. It also seems important to encourage new students to try distinct ELNs to find the one that best suits the needs of the lab.
Another alternative is to designate one person in the lab to set up the best tool and then introduce it to the whole team. It is important to realize that the transition from paper to ELNs is not going to happen overnight: each lab needs to establish its own model. In this process, a balanced integration of electronic and paper formats has been considered a better approach than the sudden replacement of paper by ELNs. Adopting cloud storage may be a useful first step for a number of labs. Some companies marketing ELNs advertise that about 15-20% of working time could be saved by adopting them; this number, as far as we know, has not been validated, but it is encouraging.

During the first entrepreneurship course held at the University of São Paulo School of Medicine (September to November 2018, Prof. Flávio Grynszpan), we focused on the 'ELN world' and, in the process, interviewed 31 science-related individuals, including PIs, post-docs, PhD and master's students, technicians and associated researchers. Surprisingly, most of them (>90%) did not even know about ELNs. Even more surprisingly, and contrary to best investigative practices, the vast majority of the interviewed investigators (excluding the PIs) use scratch-paper notes to record their experiments, with widely differing processes and organizational structures for these notes. They update their lab notebooks daily (37%) or at least once a week (29%), while the others (34%) do not write up their experiments frequently, in some cases possibly not at all (our guess…). Almost uniformly, they were very interested in the idea of entering the 'ELN world'. Thus, a striking conclusion from this small sample is that, in parallel with finding the best ways to record experimental results, one has to struggle to instill good data-recording practices. We expect ELNs to be of great help in this direction as well.

Overall, the trend toward ELNs is gaining force: it is an exciting time to try these tools in research and to get involved in a new era of information and science propagation. ELN implementation has clear potential to address several questions related to improving the accuracy, and possibly the reproducibility, of science. At the same time, we acknowledge that it is not easy to move away from ingrained habits, but sometimes some disruption is necessary to achieve progress. Along this line, the most effective way to adopt ELNs may be… just do it! …and then harvest the rewards of improved work.


References and additional information

  1. J. Giles. Going paperless: The digital lab. Nature, 481(7382): 430–1, 2012. | doi: 10.1038/481430a
  2. D. Butler. A new leaf. Nature, 436(7047): 20–1, 2005. | doi: 10.1038/436020a
  3. H. K. Machina, D. J. Wild. Electronic Laboratory Notebooks Progress and Challenges in Implementation. Journal of Laboratory Automation, 18(4): 264–8, 2013. | doi: 10.1177/2211068213484471
  4. R. Kwok. How to pick an electronic laboratory notebook. Nature, 560(7717): 269–70, 2018. | doi: 10.1038/d41586-018-05895-3
  5. S. Guerrero, G. Dujardin, A. Cabrera-Andrade, C. Paz-y-Miño, A. Indacochea, M. Inglés-Ferrándiz, H. P. Nadimpalli, N. Collu, Y. Dublanche, I. De Mingo, D. Camargo. Analysis and Implementation of an Electronic Laboratory Notebook in a Biomedical Research Institute. PLOS ONE, 11(8): e0160428, 2016. | doi: 10.1371/journal.pone.0160428
  6. S. Kanza, C. Willoughby, N. Gibbins, R. Whitby, J. G. Frey, J. Erjavec, K. Zupančič, M. Hren, K. Kovač. Electronic lab notebooks: can they replace paper? Journal of Cheminformatics, 9(1), 2017. | doi: 10.1186/s13321-017-0221-3
  7. Splice website: splice-bio.com/the-7-best-electronic-lab-notebooks-eln-for-your-research/
  8. VIB Institute ELN webpage: www.vib.be/en/training/VIB%20Informatics/Pages/ELN.aspx
  9. Newsletter SciNote: scinote.net/blog/paper-and-electronic-lab-notebooks-can-work-together/


Percíllia Victória Santos de Oliveira (percillia.oliveira@gmail.com) and
Patricia Nolasco Santos (patty_ns8@hotmail.com),
PhD students from the Vascular Biology Laboratory, Heart Institute (InCor), University of São Paulo, Brazil



Branding in Scientific Publishing

Submitted by redoxoma on Fri, 12/14/2018 - 21:15

The Radical-Free Corner by Alicia Kowaltowski and Ignacio Amigo

A recent Facebook post by a colleague celebrated his publication in the journal Nature Scientific Reports. The publication was important and should be celebrated, except for the small detail that there is, in reality, no journal called Nature Scientific Reports, only a journal named Scientific Reports. While this journal is published by Nature Publishing Group, the same company responsible for highly selective titles such as Nature and Nature Medicine, it has a completely different acceptance policy, publishing scientifically valid and technically sound papers irrespective of impact.

Adding “Nature” to the journal name may seem like an innocent mistake, but we can’t help noticing that many scientists are eager to have their names associated with prestigious periodical brand names. The practice isn't limited to the Nature brand. For example, Cell Press publishes many Cell-titled journals which carry at least part of the prestige of the trendy high-impact journal Cell, but it also produces a handful of journals without “Cell” in their names, including Neuron and Immunity. Scientists now refer to publications in “Cell journal Neuron”, “Cell Immunity” or similar variations. This constitutes, in our view, an attempt to gain visibility for journals by associating them with prominent scientific brand names.

Scientific journal branding has grown in many ways over the last few years. Publishers of prestigious journals have launched numerous new publication venues using the visibility gained by their flagship journal names. Science Publishing Group, for example, now hosts 13 journals with “Science” as the first name in the title, in addition to the traditional and high-impact Science journal. Eight journals are published under the “Cell” brand name, 14 journals currently contain “The Lancet” in their title (including the highly influential medical journal The Lancet), and an impressive 57 journals have titles that begin with the word “Nature”.

This strategy seems to have worked, as many of these brand-name journals are growing very rapidly. For example, Cell Reports was launched in 2012 and published 1040 scientific papers in 2017, an average of 2.8 papers a day. Nature Communications went from publishing 156 papers in 2010 to 4288 in 2017, more than a 27-fold increase in seven years and a current average of 11.7 papers per day. For comparison, Public Library of Science journals such as PLoS Biology and PLoS Medicine, which have similar impact factors and the same open access publishing strategy, publish a more modest 200-300 papers per year.

The growth in publications is not a result of competitive pricing; Cell Reports charges US$ 5000 per paper, while publishing in Nature Communications costs US$ 5700, more than double the median price for open access publishing (US$ 2145, as uncovered by analyzing 894 open access journals with publicly available prices). PLoS Biology and PLoS Medicine, on the other hand, charge US$ 3000 per article. PLoS One pioneered the acceptance of scientifically sound articles irrespective of impact, yet it has been surpassed by Scientific Reports as the world’s largest journal, even though the latter offers the same service at a higher price.
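As a quick sanity check, the per-day and fold-change figures above follow directly from the numbers quoted in the text (a sketch using only values from this article, not from any external source):

```python
# Sanity-check the publication-volume and pricing figures quoted above.

# Cell Reports: 1040 papers published in 2017
print(1040 / 365)    # ≈ 2.8 papers per day

# Nature Communications: 156 papers (2010) -> 4288 papers (2017)
print(4288 / 365)    # ≈ 11.7 papers per day
print(4288 / 156)    # ≈ 27.5-fold growth over seven years

# Article processing charges vs. the US$ 2145 median open access price
print(5700 / 2145)   # Nature Communications: ≈ 2.7x the median
print(5000 / 2145)   # Cell Reports: ≈ 2.3x the median
```

These results match the averages and ratios cited in the paragraphs above.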

The interest in publishing in high-priced brand-name journals could be related to a more careful editorial process. However, our own experience suggests that the editorial process is no better than that of other journals. Indeed, about one third of the evaluations of Nature Communications posted on SciRev, a journal evaluation website, include complaints about delays in manuscript handling and poor editorial management. We suggest that the primary reason for such interest is scientific branding itself. Having a brand name and social media-friendly URL such as Nature, The Lancet, Science or Cell linked to a publication is still a sign of prestige, even as these publishers widen their audience and become increasingly less exclusive.

The shift towards what we call “scientific branding” is happening at the same time as the scientific community is actively discussing means to improve publication standards. Open topics include finding new ways to evaluate quality and impact, as well as using our limited resources of time and money more efficiently. Chasing specific brands and paying high publication costs in exchange for perceived prestige is not a path we should follow.


Alicia Kowaltowski and Ignacio Amigo, from Department of Biochemistry,
Institute of Chemistry, University of São Paulo, Brazil



More on impact factor metrics: are we ready to get rid of them?

Submitted by redoxoma on Wed, 09/26/2018 - 21:20

The Radical-Free Corner by Francisco R. M. Laurindo

Recently, a young investigator wrote to Nature [1] urging us to "stop saying that publication metrics do not matter and tell early-career researchers what does" when rating the scientific achievements of young investigators. This message highlights that the increasing awareness of the inappropriate use of impact factor (IF) metrics for evaluating CVs is bringing, as a side effect, uncertainty and a lack of clarity about how someone's career achievements will be evaluated. This boils down to the simple question of whether we are ready to get rid of IF metrics.

First of all, it is increasingly accepted that using IFs as the sole metric to evaluate someone's achievements in science is flawed for a number of reasons [2]. This tendency led to the San Francisco "Declaration on Research Assessment" (DORA) in 2012, which has since been signed by over 500 institutions and 12,000 individuals, calling, among other things, for "the need to eliminate the use of journal-based metrics, such as Journal IFs, in funding, appointment, and promotion considerations" and "the need to assess research on its own merits rather than on the basis of the journal in which the research is published". In fact, recent experience indicates that hiring investigators on the basis of interviews addressing their achievements and contributions, rather than traditional publication-based CVs, has led to improved results [3]. Indeed, the current NIH-type biosketch (not dissimilar to FAPESP's) is centered on the value of each individual's contribution to science.

All these welcome advances may have raised the perception, expressed in the comment from the first paragraph, that IF-related metrics do not matter any more. I believe the cold fact is that they still do to a reasonable extent, and at present there is no completely adequate replacement for them, including for the evaluation of young investigators. The distortions caused by strictly adhering to IFs and numeric scores should not be taken to mean that we have clear, validated alternative methods available. Looking at someone's specific contributions and taking a holistic approach when comparing a few candidates for academic positions may indeed prove successful even with today's tools, given the small scale of that task. But even so, many evaluators still run parallel evaluations of IF metrics, as these are so deeply embedded in our collective unconscious and provide numerical scores with a security blanket of objectivity. Problems become particularly acute, however, when competitively evaluating a large number of CVs, e.g., for scholarships or large-scale research awards. As unfair and inaccurate as it would be to rely blindly on the metrics alone, it would also be unfair to ignore them. Despite the limitations, there is indeed some gross correlation, at least between the highest journal IFs and the quality and completeness of the published work, and the amount and quality of the effort put into it. Along this line, IFs and the number of articles do say something about the capacity of the individual to choose important problems, to dig deep into a given problem until the end, to work hard and intensively on scientific questions and to finish coherent stories about them. For the early-career investigator, relying solely on citations can be inappropriate because many good articles take a long time to get cited. Thus, the number of published articles and their IF-related metrics are still a default basis for early-career CV evaluation.
In fact, long before IFs were invented, everyone knew which were the most prestigious publications, and those who published in them were viewed favorably.

On the other hand, the limitations of these metrics are real. I believe that, down the road, these numbers can only help separate candidates that are very good or excellent from those that are merely good or median. Metrics can fail significantly, however, when trying to separate the top candidates from those that are just not as excellent. So, what can be done to improve on these issues? The DORA followers are doing away with all the metrics, but this leaves everyone, especially young investigators, uneasy about how they will be evaluated and what the best career strategies are [1]. Without the illusion (or arrogance?) of having the last word on this complex subject, I believe we are not yet ready to get rid of IF metrics in general, but the system can indeed take several extra paths to refine, reinterpret and, in the end, eventually ignore them. Here I list some features that are increasingly being taken into account in the evaluation of young investigators' CVs. In the absence of a better collective term, I will call them modifiers; that is, each of them can potentially enhance or decrease the inferences derived from metrics.

An important modifier is the intrinsic quality of the work: the overall degree of innovation, the extent of the contribution, the implications for novel ideas or potential applications, the accuracy and completeness of the investigative strategy, and so on. Logically, works with these qualities will usually take much more time to perform, leaving less time to publish other papers and potentially decreasing the number of published items. Additionally, such works may be published in journals that, despite their solid reputation and tradition, along with lengthy and demanding reviews, do not display proportionally high IFs. Such are the cases of the Journal of Biological Chemistry, the Journal of Molecular Biology and the American Journal of Physiology, among others. Conversely, some journals use a number of strategies to unrealistically maximize their IFs, and the intrinsic value of the work they publish may not be proportionally as high (a good theme for future discussions). Evaluators should take all these issues into account. However, it is important that the scientist being evaluated does not assume that reviewers will appreciate the quality of the work by default: reviewers are uniformly very busy and may not be from the particular subarea that would readily understand the specifics. Thus, the intrinsic qualities of each work have to be explicitly clarified by the author.
The investigator biosketches of many research agencies, including Fapesp, provide appropriate space for the investigator to state in a few lines the contribution and novelty of each paper, as well as anything else that indicates intrinsic value (e.g., it is pioneering work in specified aspects, it contributed to diagnostic or therapeutic advances, it served as a basis for public policies) or the community's perception of value (e.g., it prompted an invitation for a relevant talk, it was the theme of an editorial comment or chosen as the cover article). This will help identify the intrinsic qualities of published work, adjusting and improving the interpretation of numeric scores, in some cases even allowing one to move past them entirely. Interestingly, for unclear reasons, investigators have rarely made use of this strategy at Fapesp, although the biosketch format allows it.

On the opposite side, some young investigators display a CV characterized by a large number of publications of low intrinsic value: incremental contributions, not-so-innovative advances or questionable methodology. Given the conundrums of today's scientific publishing scenario, these works do get published somehow. In other cases, the works are multiauthored without a clear contribution from the author being evaluated, who sometimes appears as a middle author among several others. There is nothing wrong (and it is actually good) with getting involved in many investigations from a given group. Moreover, in some cases of multiple high-quality cooperative works, a middle position by no means indicates a negligible contribution. Again, disclosing the author's contribution to each particular work in the space provided in the biosketch is essential: it helps evaluators understand the potential value of the author's contribution and how the CV differs from the so-called "salami-slicing" type. Moreover, I suggest that authors separately highlight, in their biosketches, only the few principal works by which they want to be evaluated and leave the others as a group of collaborations. This prevents the noise from too many works from obscuring what really matters.

A further enhancer of a CV is what I would call "vertical coherence": the connectivity across the investigator's papers, allowing one to foresee the emergence of an investigative track. This multiplies the importance of each work, so that the overall value is greater than the sum of the parts. Again, these connections must be emphasized and explained by the investigator.

Another relevant aspect is that there are dimensions of impact of a scientific work that transcend the scientific sphere. This is now recognized by many research agencies, including Fapesp, and comprises: 1) social relevance and 2) economic impact. Furthermore, in some areas, general metrics of impact are lower than in others as a characteristic of the field. Again, in such cases, the relevant information will not be readily obvious to reviewers who are not specialists, and thus should be clearly highlighted by the investigator.

These considerations indicate that a number of parallel aspects can affect the perception and interpretation of IF metrics, providing a more accurate and fair picture. Are these modifiers subjective? Perhaps yes, to a good extent, but one has to balance the problems. Certainly, at this time we still face a paradox. As discussed above, IF metrics are still embedded in the system. On the other hand, it is likely that incorporating the modifiers described in this essay, and approaching them more and more systematically, will enhance our capacity to select the best achievements while escaping the tyranny of the numerical score. Decorating the basic metrics with such a "systematic subjectivity" evaluation, and perfecting it over time, seems more realistic, feasible and less traumatic to early-career scientists than abandoning metrics all of a sudden. While we love to hate IFs, they are still deep in our minds.


  1. J. Tregoning. How will you judge me if not by impact factor? Nature, 558(7710): 345, 2018 | doi: 10.1038/d41586-018-05467-5
  2. J. K. Vanclay. Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics, 92(2): 211-38, 2011 | doi: 10.1007/s11192-011-0561-0
  3. S. L. Schmid. DORA Molecular Biology of the Cell, 28(22): 2941-4, 2017 | doi: 10.1091/mbc.e17-08-0534

Francisco R. M. Laurindo, Editor in Chief of Redoxoma Newsletter
Heart Institute (InCor), University of São Paulo Medical School, Brazil

