Channel: Editage Insights

"If it is correct, economics would be different"


Rejection is part of academic publishing, and it is by no means an indication of the quality of the study or the researcher's worth. This post recounts an anecdote involving Nobel Laureate George Akerlof and how his once-rejected paper eventually earned him the most recognized prize in his field.

“If this is correct, economics would be different.” With these words, the editor of the Journal of Political Economy (circa 1970) began a letter of rejection addressed to George Akerlof. As a freshly minted PhD from MIT, Akerlof was quickly coming to terms with the realities of academic publishing. His paper, titled "The Market for Lemons," had previously been rejected by the two other leading economics journals, the American Economic Review and the Review of Economic Studies, the consensus view being that the paper's insights were 'trivial'.

To those of us who toil in the trenches of academic publishing, such feedback (and often worse) is the coin of the realm. However, Akerlof graciously concedes that the referees were correct; after all, he was discussing the mundane tedium of used-car sales. The paper was eventually published in the Quarterly Journal of Economics, and its deceptively simple notion that some market participants know more than others (information asymmetry) helped its author win the 2001 Economics Nobel.

Tempting as it may be, we must refrain from using this anecdote as a stick with which to beat the peer-review-based system of journal publication. The gatekeepers to the promised land of publication success can certainly be nitpicky, but we need them guarding those proverbial walls. Akerlof's trials do, however, serve as a reminder to develop a thick skin toward negative or critical feedback and to accord perceived personal failures their due: they sometimes lay the groundwork for later success.

Recommended reading:

Can journal rejections sometimes work to your advantage?

Turning manuscript rejection into a positive experience

Should I throw away my rejected manuscript?


What does it mean if two editors have been assigned for my paper?

Question Description: 

I submitted a paper, and it was assigned to the Assistant Editor. Two days later, I got an email saying that the paper is being handled by the Editor-in-Chief. Now the status is "Under Review." Why was the paper assigned to two editors before review? What does that indicate? Could you please clarify this for me?

Answer

I understand that you are quite anxious about the submission of your paper to your target journal. However, there is nothing you need to worry about. It is true that the journal editor or editor-in-chief (EiC) is responsible for making all major manuscript-related decisions. But in many cases, journals have Associate Editors, Assistant Editors, or Section Editors who help the EiC manage large submission volumes and make important decisions. In your case, the Assistant Editor initially screened your paper and forwarded it to the EiC for consideration, and the EiC decided that your paper was worth considering for peer review (hence the "Under Review" status). So everything is fine, and your paper is being peer reviewed.

Is it fine to inquire about a paper that's been under review for over 8 months?

Question Description: 

Hi Dr. Eddy, I have submitted a paper to one of the SIAM journals. The paper has been under review for 8.5 months. I asked the editor about it a month ago and was told that one review report had come in and they were waiting for the other. Would it be okay if I politely inquire about it now (I have inquired three times since submission)? How frequently should I inquire from now on so that it does not come across as rude? Another question: is this the usual time taken for review in SIAM journals?

Answer

It would have helped if you had mentioned your field, as SIAM publishes quite a few mathematics journals. Typically, the review period in mathematics is exceptionally long, so if you have submitted to a math journal, a review time of 8.5 months would be considered quite normal. Even then, it would be a good idea to send an inquiry to the journal editor; this sometimes makes the journal take notice and might help speed up the process.

However, if your field is not mathematics, you do have reason to worry. In that case, you should be more proactive in sending reminders to the journal. You can write to the editor every two to three weeks requesting him/her to expedite the process. If you see no change in status even after a month, you could consider withdrawing the paper and submitting it to a rapid publication journal. Perhaps you could also consider uploading your manuscript to a preprint server. That way, you will be able to establish precedence even if it takes time for your article to get published. You can read these articles to learn more about posting your paper on a preprint repository:

http://www.editage.com/insights/the-role-of-preprints-in-research-dissemination

http://www.editage.com/insights/getting-scooped-as-a-result-of-delays-at-the-journal-end-a-case-study

Does "Under Review" mean the paper has been assigned to an external reviewer?

Question Description: 

I submitted a paper to a Springer journal, and the status first said "editor invited," then after one month it said "editor assigned," and now it says "under review." Does "under review" mean that the paper has passed the editorial check and been given to an external reviewer?

Answer

For most journals, the status "under review" indicates that the paper has cleared the editorial check and has been sent for external review. However, status descriptions vary across journals: some journals also use "under review" while the manuscript is undergoing the initial editorial screening, and use "reviewers assigned" or "reviewers invited" to indicate that the paper has been sent to external peer reviewers. You can check the journal's website to see whether the statuses it uses are explained there. If there has been no indication of reviewers being assigned, you will have to wait for the next status change to get a clearer picture. If there is no change for more than a few weeks, politely write to the journal editor asking about the status of your submission.

Recommended reading:

How long will the status of my paper show "Under Review?"

How long does it take for the status to change from "reviewers assigned" to "under review"?

Can I change the first author of my journal article after submission?

Question Description: 

The paper I submitted to an SCI journal has almost been accepted; I am at the minor revisions stage. Would it be okay to change the first author at this stage? It is too difficult for me to state the reason here, but the researcher who is currently listed as the first author has no idea about the research and has basically made no contribution to the work. What do I do?

Answer

This is not an ideal situation. I am assuming that you are the submitting author, i.e., the author who submitted the paper on behalf of all the contributing authors. To begin with, you should never have submitted your paper to the journal without settling the order of authors. And if the current first author did not make any significant contribution to the study, he/she should not have been listed as the first author at all, because that is unethical.

Typically, journal editors do not encourage changes to authorship, particularly after a manuscript has been processed. When an authorship dispute arises after a paper has been submitted or accepted for publication, journals require all the authors involved to consent to the change. So in your case, it would be advisable to write to the journal editor stating the reason for the change, and since you are the submitting author, you should lead this communication. Be aware, though, that the editor will want the current first author to sign off on the change: editors are not responsible for determining authorship, the authors themselves are, and the consent of all authors to the order in which they are listed is essential. You will therefore need to ensure that the researcher currently listed as the first author agrees to this change. Communicating with the journal editor and the first author about this may not be pleasant, but it is the most ethical course and the one I would advise you to follow.

Check out this case study published by the Committee on Publication Ethics (COPE), which discusses a case where the order of authorship of a paper changed after acceptance.

What does 'primary cc e-mail address' refer to in journal communication?

Question Description: 

In the journal submission stage, I have been asked to fill in the 'primary cc e-mail address.' What does this mean?

Answer

The primary cc e-mail address is the e-mail address that you will primarily use for communication with the journal; the journal will use this address to send you e-mails. For researchers, this is generally their institutional e-mail address, although these days some researchers use an alternative e-mail address, such as a Gmail address, instead. If you are using a non-institutional address, make sure that it is the address mentioned on your website and one that you use for other professional purposes.

Seven Earth-sized exoplanets discovered in habitable zone


NASA’s Spitzer Space Telescope has discovered seven Earth-sized exoplanets that are likely to have liquid water. All seven planets closely orbit a single dwarf star, TRAPPIST-1. The system is about 40 light years away from Earth in the constellation Aquarius. Of the seven, three planets are in the habitable zone and have the highest chances of harboring life. "Answering the question 'are we alone' is a top science priority and finding so many planets like these for the first time in the habitable zone is a remarkable step forward toward that goal," said Thomas Zurbuchen, associate administrator of the agency's Science Mission Directorate in Washington. The planets are similar in size to Earth, and interestingly, the system is so tightly knit that each planet takes only around 1.5 to 20 days to orbit the star. This suggests that they may be "tidally locked," i.e., they might have only one side facing the star, so that one side of each planet is always dark and the other always bright. As a result, the atmosphere on these planets is likely to be very different from that of the Earth. Following up on this discovery, NASA's Hubble Space Telescope will screen the planets in the habitable zone for signs of life.

Read more in Science Daily and The Guardian.   

Facebook Live Q&A - Tracking your manuscript in journal submission systems

Do you have questions about the manuscript tracking system?
Do you find yourself struggling to understand various submission statuses?
Do certain submission statuses make you feel anxious?

We have the perfect forum where you can clarify all your doubts about the manuscript tracking system. Join our Facebook Live Q&A on March 2, 2017, at 8:00 A.M. EST, and get all your manuscript submission-related questions answered by our expert.

Kakoli Majumder, Live Q&A                                          

This session will take place on the Editage Insights Facebook page. Kakoli Majumder, the expert behind the Q&A forum of Editage Insights, will answer your questions based on her ample experience in dealing with authors’ queries regarding manuscript submission.

Here are some of the topics she'll deal with in the Live Q&A:

  • Various submission statuses displayed by journals' online submission systems
  • The duration for which each submission status generally lasts
  • The possible implications of various submission statuses
  • Editorial decision-making at journals


Join us for an interactive discussion at the Facebook Live Q&A session and tell us which submission status makes you most anxious. You could also share your manuscript submission related experiences and post all your questions in the comments section below the Facebook Live video, and Kakoli will answer as many as possible.  

Want to receive a Facebook notification as soon as the Live Q&A begins?
Simply click here, and ‘LIKE’ the Editage Insights Facebook Page!


Quiz - How well do you know journal submission statuses?

Quiz - Are you familiar with all the manuscript statuses in journal submission systems?

Take this fun quiz to figure out how well you understand journal submission systems and their various statuses. Along with finding out how much you know, you can bridge the gaps in your knowledge by learning from the feedback after each question.

We often get manuscript submission-related questions through our Q&A forum. Journal submission can be taxing for authors, and tracking a manuscript's status through journal submission systems can be even more confusing. It is often difficult for authors to understand the meaning of a particular status or even the expected sequence of submission statuses.

How well do you know the varied journal submission statuses?
Can you always understand what they mean?
Want to test your understanding about various submission statuses?
Then take this fun quiz and learn more in just 5 minutes!

Quick instructions:

  • Be honest to get a true evaluation of how much you know. Your answers and final result will only be visible to you.
  • Read all the options before selecting one. Once you select an option, you will receive immediate feedback about whether your answer is correct or incorrect. Don't worry! In case your answer is incorrect, you will learn the correct answer through the feedback explanation.
  • Have fun! And share the quiz with your friends and colleagues once you're done.

 

 

If you enjoyed taking the quiz, then I'm sure you would love our Facebook Live Q&A on March 2, 2017, 08:00-09:00 A.M. EST. At the Live Q&A, Kakoli Majumder, the expert voice behind our Q&A forum, will address all your questions about manuscript status-tracking and journal decision-making on our Facebook page. Don't miss out! Click here to find out more about the Live Q&A. 

My paper's status changed from 'under review' to 'editor assigned.' What does it mean?

Question Description: 

Dear Dr. Eddy, I submitted my manuscript to a Springer journal on 18th Jan 2017. The manuscript showed the 'under review' status from 24th Jan to 20th Feb. However, it changed to "editor assigned" today. What does this mean?

Answer

The status "editor assigned" typically indicates that an Associate Editor (AE) has been assigned to your paper. The AE is responsible for processing your paper from the initial editorial screening to the final decision, and it is the AE's responsibility to send your paper for peer review. Since the AE has only just been assigned, the paper has clearly not yet been sent for peer review. Therefore, the "under review" status displayed soon after submission probably referred to an internal review or admin check, which is usually done by editorial assistants. The purpose of this check is to see whether the journal's guidelines on style and formatting have been followed and whether the submission package includes all the required documents. It appears that this journal uses the status "under review" for both internal and external review. So, essentially, your paper has cleared the admin check and has just been assigned to an AE. The AE will conduct the initial editorial screening to check the scientific value and relevance of the manuscript to the journal. Your paper will be sent for peer review only once it clears this screening.

Recommended reading:

Tracking your manuscript status in journal submission systems

Can you explain what an arXiv publication is?

Question Description: 

What is the procedure for submitting to arXiv? After submitting, how can I submit the same paper to another journal?

Answer

ArXiv is not a journal: it is a public server or repository where authors can upload different versions of their research paper to make it freely available online. Authors generally upload preprints of their manuscript on arXiv before submitting their paper to a journal.

To submit your manuscript, you have to create an account on arXiv and upload the manuscript. Once the manuscript is uploaded, it goes through a quick check to ascertain that it is scientific in nature. It is then posted online within a day or two without peer review and is made freely available for everyone to view.
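Submission to arXiv itself is done through its web interface, but once a preprint is posted, anyone can retrieve it. As a small, hedged illustration of that openness (my own sketch, not part of the original answer; it assumes Python with the requests library and uses a placeholder-style arXiv identifier), the public arXiv query API returns a posted preprint's metadata as an Atom feed:

```python
import requests
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"  # arXiv's public query API
ATOM = "{http://www.w3.org/2005/Atom}"           # Atom namespace used in API responses

def fetch_preprint_metadata(arxiv_id: str) -> dict:
    """Fetch the title and abstract of a posted preprint from the arXiv query API."""
    resp = requests.get(ARXIV_API, params={"id_list": arxiv_id}, timeout=10)
    resp.raise_for_status()
    entry = ET.fromstring(resp.text).find(f"{ATOM}entry")  # first (and only) result entry
    return {
        "title": " ".join(entry.find(f"{ATOM}title").text.split()),
        "abstract": " ".join(entry.find(f"{ATOM}summary").text.split()),
    }

# Any identifier of a posted preprint works here; "2101.00001" only illustrates the ID format.
print(fetch_preprint_metadata("2101.00001")["title"])
```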

The purpose of uploading a preprint to arXiv or any other repository is to give readers immediate access to your paper. Publication in a peer-reviewed journal is a time-consuming process, whereas timely dissemination of research results is needed to accelerate scientific progress; this is what led to the practice of distributing preprints. However, preprints are not publications: they are just a way to give people access to your research. For your work to gain the stamp of credibility, it must be published in a peer-reviewed journal.

Even if you upload a preprint to a server, it is acceptable to submit the same version of the manuscript to a peer-reviewed journal. Most journals allow authors to deposit preprints of their work, that is, versions that do not contain any edits or revisions from the publication process, in open repositories. However, there may be some restrictions and/or specifications at the journal's end about submitting papers that have been deposited in a repository, so go through your target journal's instructions for authors carefully before you submit. Additionally, it is always a good idea to inform the journal editor at the time of submission that you plan to deposit or have already deposited your paper in a repository.


Academic publishing and scholarly communications: Good reads, February 2017


Every month, the Editage Insights team sifts through hundreds of posts, articles, updates, and blogs to stay on top of some of the hottest debates in scholarly publishing. The idea is to identify the top stories of the month and share them with you. This month, we have an interesting list of recommendations for you. We tell you about the situation in the UK post-Brexit, a recommended list of 15 best practices to rebuild trust in scholarly publishing, a new proposal to combat the irreproducibility crisis, fake news about science, couples in research, and much more!

  1. Scientific advisors not yet appointed in the UK: In the past, the UK has employed researchers as Chief Scientific Advisors (CSAs) to play senior advisory roles and influence policy at a broader level. However, after Brexit, two of the departments in charge of planning the UK's exit from the EU - the Department for Exiting the European Union (DExEU) and the Department for International Trade (DIT) - have not yet appointed a CSA, nor have they indicated any intention of doing so. This has increased anxiety among scientists in the UK. According to Robin Walker, a DExEU minister, the department is still considering whether it needs a CSA. Science policy experts worry that without CSAs, scientists won't be able to help the government make informed policy decisions that would benefit society.
     
  2. Is academic publishing suffering from diseases? John Antonakis, a psychologist and editor of the journal The Leadership Quarterly, has described the problems in science publishing in terms of diseases in his paper "On doing better science: From thrill of discovery to policy implications." According to him, science is suffering from five diseases: (a) significosis, the obsession with producing statistically significant results; (b) neophilia, the excessive weight given to novel results; (c) theorrhea, an excessive desire for new theory, which particularly affects the social sciences; (d) arigorium, a deficiency of rigor in empirical and theoretical work; and (e) disjunctivitis, the tendency to produce large quantities of redundant work. Antonakis believes that all of these diseases have common causes: the incentives for doing research, current research and publishing practices, and the conditions under which research is done. He calls upon researchers, editors, and funders to prevent, diagnose, and treat these diseases.
     
  3. Time to do away with fake news: Fake news is not new to science, but what is concerning is the use of social media to disseminate such news much faster. Addressing scientists at the annual meeting of the American Association for the Advancement of Science, communications expert Dominique Brossard said that in the context of science, the spread of fake news through online social networks like Facebook and Twitter gets murkier because it is difficult to tell whether false information was spread intentionally or was simply the result of a poorly conducted study. The best way to counter misinformation in science news would be for scientists to take on the responsibility of communicating science and to work with journalists to help explain and contextualize their work.
     
  4. A solution to tackle the reproducibility crisis: Jeffrey S. Mogil, director of the Alan Edwards Centre for Research on Pain at McGill University, and Malcolm Macleod, professor of neurology and translational neuroscience at the University of Edinburgh, discuss a novel way of ensuring the reproducibility of medical research papers based on animal studies in this Nature article. They propose that papers dealing with animal studies of disease therapies or preventions should include a trial constituting "an independent, statistically rigorous confirmation of a researcher's central hypothesis." They call this a confirmatory study: it would have to follow high standards of analysis and reporting, be conducted by an independent laboratory, and be held to a "higher threshold of statistical significance." While the pair admits that this proposal may be difficult to execute, they urge funders and assessment committees to demand such measures to ensure that published papers report valid results. It would also encourage researchers to be more thorough with their research.
     
  5. Researchers should not need a blacklist to identify bad publishing venues: The discontinuation of Jeffrey Beall's list of predatory publishers last month caused some consternation in the science community, with many scientists expressing the need for a replacement or a new equivalent of Beall's list. In this interesting article, Cameron Neylon explains why he has never been a supporter of Beall's list and why he believes the concept of a blacklist is fundamentally flawed. According to Neylon, blacklists are by definition incomplete; they are also highly susceptible to legal challenge and vulnerable to personal bias. Scholars should be able to independently identify a good venue through which to communicate their work, he says.
     
  6. 15 ways you can stand up for the cause of science: In this inspiring post, Alice Meadows, Director of Community Engagement & Support at ORCID, talks passionately about the recent issues plaguing scientific research and publishing. Too many things have gone wrong already, Alice argues - from unethical publication practices to poor policy-level decisions driven by vested political interests. She then shares a list of 15 things everyone in academia and scholarly publishing must do to help rebuild trust in scientific research and scholarly communication. Her list includes some solid recommendations, such as joining the Reproducibility Initiative, considering open access in some form, supporting the Committee on Publication Ethics (COPE), and using tools that help researchers create, store, manage, and share their data effectively.
     
  7. How researcher couples manage their work and personal lives: "There are many couples in science," says Amber Dance in this interesting article, which discusses how common it is for researchers to enter into relationships and work alongside each other. The advantages of having a partner/spouse at work or in the same department or institution are numerous. The benefits can be non-academic, e.g., carpooling and a mutual understanding of the demands of each other's work, or academic, e.g., peer reviewing for each other. However, researcher couples also need to be careful, especially when their work or interactions could give rise to conflicts of interest. Another challenge is maintaining work-life balance and ensuring that work doesn't seep into their personal interactions.

So did you enjoy reading the posts we recommended for you this month? Have you read any of these already? Do you have an opinion to share? Feel free to share your thoughts in the comments section below. You can also follow our Industry News segment, where we share regular updates on what the academic publishing industry is talking about.

What can I do if the review of my paper that I need for graduation is delayed?

Question Description: 

I submitted a paper to an English-language journal more than 100 days ago, but the first decision after peer review has not been made yet. This paper is the basis of my doctoral thesis, so it must be accepted within this school year (i.e., by the end of March). I have already informed the editorial office of my situation and asked them to make a decision soon, but it is still taking time. Is there anything I can do to move the situation forward? I heard that one peer review has already been completed and another is still in progress.

Answer

Unfortunately, journal processes are quite slow, and peer review can take anywhere between two and four months, or even more. In general, the entire process from submission to first decision can take six to eight months, so ideally you should have submitted your paper earlier. I don't think you can do much at this stage to hasten the process apart from sending reminders to the editor from time to time; you could write to the journal every two weeks asking about the progress.

Another option you can consider is withdrawing the paper from this journal and submitting it to a rapid publication journal. You can look for rapid publication journals in your field that have a really short review time. You can send a pre-submission inquiry explaining by when you need the decision and asking whether they would be able to give a decision within that time. If the journal agrees, you can consider submitting your paper. However, make sure that you have a confirmation of withdrawal from the previous journal before you submit to the other journal.

Recommended reading:

How to write a withdrawal letter to the journal

What can I do if the editor does not confirm my withdrawal request?

How to write an email declining an invitation to submit my paper?

Question Description: 

I was asked by an academic journal to publish a paper of mine that was presented at an international conference. However, the paper has already been submitted to and accepted by another journal. I am not sure how to decline this request. Could you please show me how to write such an email in English?

Answer

You can use this template to decline the invitation to submit your paper:

Dear Mr./Ms./Dr. [Editor's name],

Thank you for your invitation to submit my manuscript titled [Insert manuscript title] to your journal. However, I am sorry to inform you that the manuscript has already been accepted for publication by [Insert name of journal that has accepted the manuscript].

That said, I would definitely look forward to publishing with your journal in the future.

Sincerely,

[Your name]

 

The irreproducibility problem is serious, but it is also misunderstood

Interview with Dr. Jonas Ranstam, medical statistician and winner of the Sentinels of Science award

Researchers are busy people, and Dr. Jonas Ranstam is perhaps the busiest of them all. Dr. Ranstam is, officially, the world’s most prolific peer reviewer, having reviewed as many as 661 papers in a year. In 2016, this medical statistician was the overall winner of the Sentinels of Science Awards, initiated by Publons to recognize the efforts of reviewers, and was also acknowledged by Publons as one of the Top Reviewers for 2016. I feel honored to have this opportunity to talk to Dr. Ranstam about a range of topics, from medical statistics to peer review.

Before retiring from being a full-time academic, Dr. Ranstam was affiliated with several institutions, including Lund University, Sweden, as Professor and senior lecturer of medical statistics. Currently, as an independent medical statistician, he acts as a statistical advisor to clinical and epidemiological investigators at academic and research institutions, hospitals, governmental agencies, and private companies. He also offers his expertise to Osteoarthritis and Cartilage (as deputy editor), the British Journal of Surgery (as statistical editor), and Acta Orthopaedica (as a statistics consultant), and serves as a statistical reviewer for several international scientific medical journals. In addition, he maintains the Statistical Mistakes blog, which focuses on systematic reviews of statistical mistakes in medical research and presents references to literature describing how to avoid such mistakes.

In this first segment of the interview, Dr. Ranstam talks about a range of topics – from statistical methodology, the blog he maintains, and the problem of disclosing the uncertainty of findings in medical research to the irreproducibility crisis. He also talks about the common mistakes researchers make when presenting statistical data in their manuscripts.

Let’s begin by talking about your current profile. What do you do as an independent statistician/consultant?

I work with medical research problems, mainly in the area of clinical treatment research. For example, I participate in the development of study design in several research projects, and I write study protocols and statistical analysis plans. I analyze data and write research reports. I also review manuscripts, grant applications, and sometimes job applications. However, in contrast to my previous job as a university professor, I have very few administrative tasks and almost no teaching.

What led you to start your blog, Statistical Mistakes?

It started with a reference list for my own use. I often include references to published papers in my review comments to facilitate learning for the authors, and I wanted to have easy access to my list. Just keeping it in a Word document was not a good alternative as I usually work with different computers and at various locations. The simplest solution turned out to be the WordPress blog system.

I didn't see it as a disadvantage that the list became public; I thought it could also be useful for others writing and reviewing manuscripts.

I am engaged in two other blogs as well: ArthroplastyWatch, an international collection of joint replacement safety alerts, and DRICKSVATTEN.BLOG, a national collection of local Swedish drinking water alerts.

On your blog, you mention that medical researchers are “ignorant about statistical methodology.” How can this change? How could a medical researcher or any researcher working with data and using statistical analyses be made more aware of the problem?

Yes, that is unfortunately true. Douglas Altman once wrote [Altman DG. Statistical reviewing for medical journals. Stat Med 1998;17:2661-2674] that "the majority of statistical analyses are performed by people with an inadequate understanding of statistical methods. They are then peer reviewed by people who are generally no more knowledgeable".

The consequences of the statistical mistakes affect us all. Without them we could have had more effective treatments with fewer complications and lower costs. I believe that the main problem is that successful medical research requires understanding of stochastic phenomena, and most medical researchers tend to have a deterministic orientation.

Several attempts to improve the quality of medical research have been made. Statistical reviewing, for example, is considered increasingly important by many medical journals. The use of public trial registers and compliance with reporting checklists, such as CONSORT, PRISMA, and ARRIVE, have also become an integrated part of the requirements for having manuscripts accepted for publication.

During one of your presentations, you mentioned that “Many (if not all) authors severely underestimate the uncertainty of their findings.” Could you elaborate?

Medical research is mostly quantitative, i.e., it includes quantification of the finding's sampling and measurement uncertainty. This is usually measured in terms of p-values and confidence intervals. Non-significant results are often considered too uncertain to be publishable.

It is, however, possible to give the impression that the uncertainty is lower than it is, even when p-values and confidence intervals are correctly calculated. For example, hypothesis generating study results can be presented as if they had been confirmatory, and the effects of multiple testing can be ignored, or corrected for in an inadequate manner. Such inadequacies are not necessarily intentional, but the general methodological practice seems to have a tendency to produce research findings with systematically overrated empirical support. Given the importance of publishing in a "publish or perish" culture, this development should perhaps not come as a surprise.
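To make the multiple-testing point concrete, here is a minimal simulation sketch (my own illustration rather than anything from the interview; it assumes Python with NumPy and SciPy). With 20 endpoints and no true effects, declaring any p < 0.05 "significant" yields at least one false positive in roughly two-thirds of experiments, whereas a simple Bonferroni correction keeps that family-wise rate near the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_endpoints, n_per_group, alpha = 2000, 20, 30, 0.05

any_raw, any_bonf = 0, 0
for _ in range(n_trials):
    # Two groups drawn from the same distribution: every null hypothesis is true.
    a = rng.normal(size=(n_endpoints, n_per_group))
    b = rng.normal(size=(n_endpoints, n_per_group))
    p = stats.ttest_ind(a, b, axis=1).pvalue
    any_raw += (p < alpha).any()                 # uncorrected: any endpoint with p < 0.05
    any_bonf += (p < alpha / n_endpoints).any()  # Bonferroni: divide alpha by the number of tests

print(f"P(at least one false positive), uncorrected: {any_raw / n_trials:.2f}")   # roughly 0.64
print(f"P(at least one false positive), Bonferroni:  {any_bonf / n_trials:.2f}")  # roughly 0.05
```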

In another presentation, you mention that journal editors are keen on publishing guidelines because guidelines generate citations. Could you please elaborate?

It has been discussed that some publication types, such as review articles and guidelines, generate more citations than other types and therefore have greater influence over a journal's impact factor. 

I don't know how well this phenomenon has been studied, but I remember that when I started my career in medical statistics, the most cited publication in medical research was Sydney Siegel's Nonparametric Statistics, a statistics textbook with guidelines on the use of distribution-free tests.

What role do data management, data storage, and data sharing play in medical statistics and biostatistics research?

My personal opinion is that the reproduction of results is important and necessary, but the discussions on open data and data sharing also seem a bit naive. Working with complex database structures and advanced statistical analyses presents many problems that shouldn't be underestimated. Mistakes and misunderstandings in a statistical reanalysis can easily discredit sound research findings. I believe that public sharing of data needs to be combined with measures to avoid such problems.

In your opinion, how big is the irreproducibility problem facing science? How can it be addressed/fixed?

The irreproducibility problem is serious, but it is also misunderstood. Scientific development relies on the questioning of established truths; to reproduce results is an important part of this, and not succeeding is not necessarily a bad thing.

I believe that it is important to label studies correctly. Many studies are exploratory; the aim is to generate hypotheses. Such studies can be well planned and performed, but they can also be fishing expeditions with results that are mere speculations. The uncertainty of these findings cannot be reliably calculated, so why should the results be reproducible?

However, the results from confirmatory studies are also uncertain, albeit at a defined level, because such studies are designed and performed in a way that enables calculating the inferential uncertainty of their results. Nevertheless, a part of these results can be expected to be false and to fail to reproduce.

Statistical mistakes, unfortunately, play a prominent role in many studies. Laboratory experiments, for example, often lack pre-specified endpoints and analysis plans, include multiple testing with inadequate use of multiplicity correction, and are based on correlated instead of independent observations. In addition, whether or not the assumptions underlying the statistical evaluation are fulfilled is often ignored. Other, equally severe mistakes are common in epidemiological studies.

There is no simple way out of this mess, but statistical rigor is obviously necessary for a more rational use of our research resources.

In your experience as an author, reviewer, and editor, what are the most common mistakes authors make when presenting statistical data in their manuscripts? How can these mistakes be avoided?

The most common mistakes are, in my opinion, caused by the misunderstanding of p-values and statistical significance. These are measures related to uncertainty but are typically mistaken for tokens of importance.

Several recently published articles, including one from the American Statistical Association, have discussed these problems and proposed changes. One journal, BASP (Basic and Applied Social Psychology), has also banned the use of p-values and other statistical measures that form part of "null hypothesis significance testing". However, ignoring inferential uncertainty just makes the situation worse.
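As a hedged illustration of that confusion (again my own sketch in Python with NumPy and SciPy, not something from the interview): with a large enough sample, a practically negligible difference yields a tiny p-value, while a sizeable difference in a small sample often does not reach significance, showing that the p-value reflects uncertainty about an estimate rather than the importance of the effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Negligible effect (0.02 SD) measured on a huge sample.
big_a = rng.normal(0.00, 1.0, size=200_000)
big_b = rng.normal(0.02, 1.0, size=200_000)

# Sizeable effect (0.5 SD) measured on a small sample.
small_a = rng.normal(0.0, 1.0, size=15)
small_b = rng.normal(0.5, 1.0, size=15)

for label, x, y in [("negligible effect, n = 200,000 per group", big_a, big_b),
                    ("sizeable effect,   n = 15 per group      ", small_a, small_b)]:
    result = stats.ttest_ind(x, y)
    # A small p-value signals low sampling uncertainty, not a large or important effect.
    print(f"{label}: mean difference = {y.mean() - x.mean():+.3f}, p = {result.pvalue:.3g}")
```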

That brings us to the end of this segment of the interview with Dr. Jonas Ranstam. In the next segment, Dr. Ranstam will talk about peer review in scholarly publishing. Stay tuned!


Pioneer of citation analytics Eugene Garfield passes away

Eugene Garfield, pioneer in the field of citation analytics - an obituary

Eugene Garfield (1925-2017) occupies a prominent place in academia for shaping the way research is assessed globally. The pioneer of citation analytics passed away on February 26, 2017, at the age of 91.

Garfield’s remarkable career began at Columbia University in New York City, where he received a Bachelor of Science degree in chemistry, after which he completed a PhD in linguistics at the University of Pennsylvania. In 1955, he founded the Institute for Scientific Information (ISI), where he developed the Science Citation Index (SCI), his biggest contribution to citation analysis. His formulation of the Journal Impact Factor (JIF/IF) took academia by storm. C. Sean Burns, an assistant professor of information science at the University of Kentucky, said, “Before [ISI’s] Web of Science, scientists and researchers had very inefficient methods for finding and tracing other scientific documents. The citation database was not just an intellectual achievement, but also an engineering achievement.”

From being a measure of a journal’s reach, the impact factor quickly became a yardstick to measure the worth of researchers and their publications. According to David Pendlebury, an analyst at Clarivate Analytics who worked with Garfield for more than 30 years, Garfield openly spoke against the misuse of the impact factor. Nevertheless, his influence on modern science is unparalleled. Ivan Oransky, cofounder of Retraction Watch, stated, “Regardless of what you think about the impact factor, his contribution to helping scientists in academia think about metrics […] that field basically wouldn’t exist without him.” In 1992, the Thomson Corporation (later Thomson Reuters) acquired ISI and its citation index, and since 2016 both have been maintained and run by Clarivate Analytics. The impact factor “continues to serve as a reliable and efficient guide to the sprawling world of research,” said Clarivate Analytics.

Although he is best known as the founding father of the SCI, Garfield had other notable achievements to his credit. In 1986, he founded The Scientist, a renowned news magazine for researchers. Vitek Tracz, the publisher of Faculty of 1000 and a former co-owner of The Scientist, said, “He was a genius of a very special type. Not only because he had this incredible imagination and brain, but he had incredible tenacity and courage.” Survived by his wife Meher, his children, granddaughters, and great-grandchildren, Garfield has left his mark on science, and generations of academics will look up to his work.

Recommended reading:

Why you should not use the journal impact factor to evaluate research

Chasing the impact factor: Is it worth the hassle?

The advance and decline of the impact factor

How can I find out if a journal is included in the Thomson and Reuters list?

References:

Scientometrics Pioneer Eugene Garfield Dies

Citation analytics pioneer Eugene Garfield dies, aged 91

Dr. Eugene Garfield, founding father of Clarivate Analytics’ web of science, died at the age of 91

How to write methods for chronological and thematic models in a literature review?

Question Description: 

Please help me understand this, with examples if possible. I'm a little confused. Thank you.

Answer

A chronological literature review describes each work in succession, starting with the earliest available information. Typically, in the methods section of a chronological review, you group the sources in order of their publication date. For example, if the earliest available article on the topic dates back to 1991, you could arrange the sources in three groups: work published from 1991 to 2000, from 2001 to 2010, and from 2011 to the present.

This structure is generally used when the focus is on showing how ideas or methodology have progressed over time. For instance, a literature review that focuses on skin cancer in teens could be structured chronologically by examining the earliest methods of diagnosis and treatment and gradually progressing to the latest models and treatments.

In a thematic literature review, the author organizes and discusses existing literature based on themes or theoretical concepts he or she feels are important to understanding the topic. For instance, an author writing a literature review on skin cancer in teens using this approach would possibly include separate sections on studies about melanoma and non-melanoma skin cancer, tanning as a cause of skin cancer, teenager awareness and attitudes to skin cancer, and treatment models.

 

What is the meaning of "Error: DOI not found"?

Question Description: 

Dear Dr. Eddy, my manuscript got accepted and was published online. But when I click on the DOI link for this paper, I get a message saying "Error: DOI not found." What are the reasons for this, and is there a problem with the published paper? If there is a problem, how can I fix it?

Answer

The DOI website provides detailed information about error messages. According to this information, it is possible that the error occurred because the DOI mentioned in your published article is incorrect. Check the article carefully, and if there is a mistake, inform the journal editor and request a correction. Another possible reason is that you have copied the DOI incorrectly from the source: check that the string includes all the characters before and after the slash and that you have not included any extra punctuation marks. The third possibility is that the DOI has not been activated yet; try again after some time, and if the link still does not work, report the problem.
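If you would like to check the link yourself before writing to the journal, here is a minimal sketch (an illustration assuming Python with the requests library; the DOI shown is only a placeholder). The doi.org resolver answers with a redirect for a registered DOI and with a 404 "DOI not found" response for an unregistered or mistyped one:

```python
import requests

def doi_resolves(doi: str) -> bool:
    """Return True if https://doi.org/<doi> redirects to a landing page, False if it is not found."""
    response = requests.head(f"https://doi.org/{doi.strip()}",
                             allow_redirects=False, timeout=10)
    # Registered DOIs answer with a 3xx redirect to the article's landing page;
    # unknown or mistyped DOIs come back with a 404 ("DOI not found").
    return 300 <= response.status_code < 400

print(doi_resolves("10.1000/placeholder-doi"))  # placeholder DOI, expected to print False
```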

If you cannot identify the reason for the problem, and if the error persists, you can write to doi-help@doi.org.

 

What can I do if my journal editor has not yet read my submission email?

Question Description: 

I submitted my paper two weeks ago via email. When I recently checked my mailbox, I saw that my email has not yet been read (it is still marked "unread"). I am now concerned about this and wondering what I should do. Should I wait, or should I send another email to check whether the journal received my submission?

Answer

Although most journals have dedicated submission systems, some continue to manage submissions over email, and your target journal appears to be among the latter. Since journal editors receive a lot of submissions, it is difficult for them to keep track of the constant inflow of emails from authors at different stages of publication. It appears that you requested a read receipt for your email and have not received it yet, so it is possible that your email has not been read by the editor or that it has landed in the Junk or Spam folder. There are two things you can do.

First, check the journal's website for any information about its typical response time after submission; it may be that the journal generally takes a while to get back to authors. If you find no information, wait for another day or two and then send a follow-up email about the status of your submission. Say that you are writing to check whether the submission has been received because you did not get an acknowledgement, and forward your previous email so that the editor can see the earlier communication. Make sure you write a clear subject line, for example: Re: Submission of my paper titled "the title of your paper" on <mention the date>. Second, check whether the journal also has an online submission system. If it does, consider submitting your paper through that system; while doing so, mention that you are submitting via the online system because you already sent the paper by email (mention the date of your email) and received no response or acknowledgement from the journal.

Can I submit two papers on very similar topics to the same journal?

Question Description: 

I am currently in the process of submitting two different papers. The two papers are on the same topic but are slightly different. I am the corresponding author for both, although the first authors are different in each case. I feel that this may not be right and could be a disadvantage. In this case, would it be better to submit my papers to two different journals, or can they be submitted to the same journal?

Answer

Publishing two papers on the same or a very similar topic can be considered unethical unless the two studies are completely different in content and focus. If the two papers have very similar or overlapping content, they might be considered duplicate submission or salami slicing.

Duplicate submission refers to the practice of submitting two very similar papers to two journals and trying to publish them as two papers when one would suffice. Salami slicing refers to the practice of partitioning a large study that could have been reported in a single research article into several smaller published articles; a set of papers is referred to as salami publications when more than one paper covers the same data, methods, and research question. Both practices are considered unethical, and the journal can take action against you if they are detected.

Therefore, you should be very careful about publishing two papers on the same topic. The data and methods sections can be similar, but you have to make sure that each paper has a distinct research question and that all the other sections - the literature review, discussion, and analysis of the findings - are completely different. If you feel that the two papers are very similar or have a lot of overlapping content, it would be better not to publish the second paper at all, or to publish it as a second part or follow-up study. You can send a pre-submission inquiry to the editor, giving a short summary of both papers and asking whether they can be published as two separate studies or whether the second one can be published as a follow-up to the first.

Recommended reading:

What can be the reason behind two papers by the same author having similar content?

Is it ethical to submit a paper to two or more journals simultaneously?

Is publishing the same paper in different languages duplication?

 
