Wednesday, March 29, 2017

B - Full discovery: the publisher's role

Dove JG. Full discovery: what is the publisher's role? Learned Publishing 2017;30(1):81-86
(doi: 10.1002/leap.1086)

Efforts over the years to improve content discoverability have made great progress, but the increasing amount of freely available content raises new issues. Readers of all kinds rely on a variety of ‘discovery pathways’, such as search engines, library systems, and various electronic links, some of which are blind to the content readers desire. The National Information Standards Organization (NISO)’s Discovery to Delivery (D2D) Topic Committee has developed a grid comparing the various ways in which content is shared with the various ways in which users discover it. This article brings to light some of the current obstacles and opportunities for innovation by publishers, aggregators, search engines, and library systems.
http://onlinelibrary.wiley.com/doi/10.1002/leap.1086/full

B - Evidence-based review of Open Access

Tennant JP, Waldner F, Jacques DC, et al. The academic, economic and societal impacts of Open Access: an evidence-based review. F1000Research 2016;5:632
(doi: 10.12688/f1000research.8460.3)

This review presents published evidence of the impact of Open Access on the academy, economy and society. Overall, the evidence points to a favorable impact of OA on the scholarly literature through increased dissemination and reuse. OA has the potential to be a sustainable business venture for new and established publishers, and can provide substantial benefits to research- and development-intensive businesses, including health organisations, volunteer sectors, and technology. The social case for OA is strong, in particular for advancing citizen science initiatives, and leveling the playing field for researchers in developing countries.
https://f1000research.com/articles/5-632/v1

B - Accountability in publishing

Mani H. Foot print of a paper: accountability in academic publishing. The Lancet 2016;388:562-563
(doi: 10.1016/S0140-6736(16)31217-X)

At the moment, the publishing process is neither transparent nor accountable to readers. A published paper carries no record of previous submissions to other journals or of the comments received on its journey to final publication. A transparent and openly recorded submission and review process would create accountability, improve the quality of papers and of peer review, and reduce the chances of the systematic cheating reported previously. Reviewers' scientific input could also be counted among their academic activities. A database registering each paper before submission could issue an internationally recognised identification number to help track submissions.
http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(16)31217-X/abstract

B - Journal self-citations

Heneberg P. From excessive journal self-cites to citation stacking: analysis of journal self-citation kinetics in search for journals, which boost their scientometric indicators. PLoS One 2016;11:e0153730
(doi: 10.1371/journal.pone.0153730)

Little is known about the kinetics of journal self-citations. The author hypothesized that they may show a generalizable pattern within particular research fields or across multiple fields. Currently used scientometric indicators provide only limited protection against unethical behavior. An algorithm needs to be developed to search for potential citation networks so that they can be efficiently eliminated. It could be based on the difference between the number of citations received from a respective journal during the impact factor calculation window (post-publication years 1–2) and the number received only later (e.g., post-publication years 4–7).
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4835057/

B - Publication ethics statement

Gasparyan AY, Yessirkepov M, Voronov AA, et al. Statement on publication ethics for editors and publishers. Journal of Korean Medical Science 2016;31(9):1351-1354
(doi: 10.3346/jkms.2016.31.9.1351)

Editors and publishers frequently encounter the fast-growing problems of authorship, conflicts of interest, peer review, research misconduct, unethical citation, and inappropriate journal impact metrics. The aim of this Statement is to increase awareness among all stakeholders in science communication of emerging ethical issues in journal editing and publishing, and to initiate a campaign to upgrade and enforce related journal instructions.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4974174/

B - Systematic reviews

Barbui C, Addis A, Amato L, et al. Can systematic reviews contribute to regulatory decisions? European Journal of Clinical Pharmacology 2017;73(4):507-509
(doi: 10.1007/s00228-016-2194-y)

Discusses the potential usefulness of systematic reviews in responding to regulatory needs. By collecting, analysing and critically appraising all relevant studies on a specific topic, they may be used by different stakeholders as a basis for making clinical and policy recommendations, including regulatory recommendations. They may simultaneously produce new findings and summarize existing knowledge, with the potential of informing regulatory decisions more pragmatically and more rapidly than other research designs.
http://link.springer.com/article/10.1007%2Fs00228-016-2194-y

Tuesday, March 28, 2017

B - Reproducibility

Allison DB, Brown AW, George BJ, et al. Reproducibility: a tragedy of errors. Nature 2016;530(7588):27-29
(doi: 10.1038/530027a)

Mistakes in peer-reviewed papers are easy to find but hard to fix. Post-publication peer review is not consistent, smooth or rapid. Many journal editors and staff members seemed unprepared or ill-equipped to investigate, take action or even respond. The authors summarized their experience, the main barriers they encountered, and their thoughts on how to make published science more rigorous.
http://www.nature.com/news/reproducibility-a-tragedy-of-errors-1.19264

B - Are pseudonyms ethical in publishing?

Teixeira da Silva JA. Are pseudonyms ethical in (science) publishing? Neuroskeptic as a case study. Science and Engineering Ethics 2016.
(doi: 10.1007/s11948-016-9825-7)

In science publishing, there are increasingly strict rules regarding the use of false identities for authors, the lack of institutional or contact details, and the lack of conflicts of interest, and such instances are generally considered to be misconduct. The author focuses on Neuroskeptic, a highly prominent science critic, primarily on the blogosphere and in social media, highlighting the dangers associated with the use of pseudonyms in academic publishing.
http://link.springer.com/article/10.1007/s11948-016-9825-7

B - India's publication in predatory journals

Seethapathy GS, Santhosh Kumar JU, Hareesha AS. India's scientific publication in predatory journals: need for regulating quality of Indian science and education. Current Science 2016;111(11):1759-1763

The objective of this study was to estimate which categories of educational and research institutes predominantly publish in predatory open access journals in India, and to understand whether academicians in India are aware of such journals. Results showed that India lacks mechanisms for monitoring the research carried out in its higher educational and research institutes.
B - Writing an effective article submission letter

Writing an effective journal article submission cover letter. San Francisco Edit 2017

The journal editor decides whether to send the article to reviewers on the basis of the cover letter and the abstract of your manuscript. The cover letter is therefore an important component of the submission process: it should contain information that will generate interest and encourage the journal editor to evaluate the manuscript.
http://www.sfedit.net/

B - History of peer review

Baldwin M. In referees we trust? Physics Today 2017;70(2):44-49
(doi: 10.1063/PT.3.3463)
                                                                       
The imprimatur bestowed by peer review has a history that is both shorter and more complex than many scientists realize. This article reviews the history of peer review for both journals and grant-giving bodies, revealing that it has undergone many changes and only became the standard for scientific acceptability relatively recently. It also discusses the present situation and the pressures peer review faces today.

B - Review of altmetrics

Gonzalez-Valiente CL, Pacheco-Mendoza J, Arencibia-Jorge R. A review of altmetrics as an emerging discipline for research evaluation. Learned Publishing 2016;29(4):229-238
(doi: 10.1002/leap.1043)

This article analyses the scientific production of publications on altmetrics as an emergent discipline for research evaluation, with the aim of identifying the investigative tendencies that characterize the subject. A total of 253 documents indexed in the Web of Science and Scopus databases were retrieved, showing growth in articles over 2005–2015. Half of the publications come from the USA and the UK. The highest co-occurrence of terms was social media-altmetrics, followed by Twitter-altmetrics.
http://onlinelibrary.wiley.com/doi/10.1002/leap.1043/full

B - Evolution of impact and productivity

Sinatra R, Wang D, Deville P, et al. Quantifying the evolution of individual scientific impact. Science 2016;354(6312)
(doi: 10.1126/science.aaf5239)

Are there quantifiable patterns behind a successful scientific career? Sinatra et al. analyzed the publications of 2,887 physicists, as well as data on scientists publishing in a variety of fields. They quantified the changes in impact and productivity throughout a career in science, finding that impact, as measured by influential publications, is distributed randomly within a scientist’s sequence of publications.
http://science.sciencemag.org/content/354/6312/aaf5239

Wednesday, March 22, 2017

B - Translational medicine data

Satagopam V, Gu W, Eifes S, et al. Integration and visualization of translational medicine data for better understanding of human diseases. Big Data 2016;4(2):97-108
(doi: 10.1089/big.2015.0057)

The authors present an integrated workflow for the exploration, analysis, and interpretation of translational medicine data in the context of human health. Three Web services—tranSMART, a Galaxy Server, and a MINERVA platform—are combined into one big data pipeline. Native visualization capabilities enable biomedical experts to get a comprehensive overview of, and control over, the separate steps of the workflow. The workflow is available as a sandbox environment, where readers can work with the described setup themselves. This work shows how visualization and interfacing of big data processing services facilitate the exploration, analysis, and interpretation of translational medicine data.
http://online.liebertpub.com/doi/abs/10.1089/big.2015.0057

B - Patient perspectives and clinical research

Crowe S, Giles C. Making patient relevant clinical research a reality. BMJ 2016;355:i6627
(doi: 10.1136/bmj.i6627)

A wide gap exists between what generally receives funding and what patients, carers, and the public would like to see examined. Incorporating patient perspectives more thoroughly into clinical research would broaden its scope and help answer the research questions likely to bring about the biggest improvements in our understanding of disease. Nevertheless, several problems underlie our current inability to make research relevant to patients and the wider public. The BMJ already insists that all submitted research includes a statement describing how the authors did or did not involve patients. The journal also operates a system of patient peer review. If other medical journals follow suit, the message about patient-relevant research is more likely to be heard.
http://www.bmj.com/content/355/bmj.i6627


B - Comparison of primary outcomes in protocols

Perlmutter A, Tran VT, Dechartres A, et al. Comparison of primary outcomes in protocols, public clinical-trial registries and publications: the example of oncology trials. Annals of Oncology 2016;mdw682
(doi: 10.1093/annonc/mdw682)

The authors compared primary outcomes in protocols, ClinicalTrials.gov, and publications of oncology trials, and evaluated the use of ClinicalTrials.gov, as compared with protocols, in detecting discrepancies between planned and published outcomes. In oncology trials, primary outcome descriptions in ClinicalTrials.gov are often of low quality and may not reflect what is in the protocol, thus limiting the detection of modifications between planned and published outcomes.
https://academic.oup.com/annonc/article-lookup/doi/10.1093/annonc/mdw682