By Penny Hoffmann
When we think of academia, we would like to believe it is entirely factual, since many of its findings are eventually published for public view. However, academia, like any other institution, faces its own problems.
Richard Dawkins, an evolutionary biologist best known for his commentaries on religion, writes in his famous book “The Selfish Gene” that students and colleagues may be the backbone on which the reputations of some senior scientists are built:
“I recently learned a disagreeable fact: there are influential scientists in the habit of putting their names to publications in whose composition they have played no part. Apparently some senior scientists claim joint authorship of a paper when all they have contributed is bench space, grant money and an editorial read-through of the manuscript. For all I know, entire scientific reputations may have been built on the work of students and colleagues!”
According to a research article titled “The natural selection of bad science”, authored by Paul E. Smaldino and Richard McElreath, misuse of procedures and methods remains both common and normative. Unless otherwise attributed, the quotes that follow are from this article:
“In March 2016, the American Statistical Association published a set of corrective guidelines about the use and misuse of p-values. Statisticians have been publishing guidelines of this kind for decades. Beyond mere significance testing, research design in general has a history of shortcomings and repeated corrective guidelines. Yet misuse of statistical procedures and poor methods has persisted and possibly grown. In fields such as psychology, neuroscience and medicine, practices that increase false discoveries remain not only common, but normative.”
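As an illustrative aside (not from the article), a short calculation shows one way such practices inflate false discoveries: if a researcher runs many independent tests on null effects at the conventional 0.05 threshold, the chance of at least one spurious “significant” result grows rapidly with the number of tests.

```python
# Illustrative sketch: family-wise error rate under uncorrected
# multiple comparisons. All hypotheses are assumed to be true nulls,
# tested independently at significance level alpha.

def family_wise_error(alpha: float, k: int) -> float:
    """Chance of at least one false positive across k independent tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20, 100):
    rate = family_wise_error(0.05, k)
    print(f"{k:3d} tests at alpha=0.05 -> {rate:.1%} chance of a false positive")
# 20 uncorrected tests already give roughly a 64% chance of a
# "discovery" even when no real effect exists.
```

This is the arithmetic behind the guidelines the ASA keeps republishing: significance thresholds only mean what they claim when the analysis is not repeated until something passes.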
Many prominent researchers reportedly believe that “as much as half of the scientific literature” may be wrong, and that “fatal errors and retractions”, “especially of prominent publications”, are increasing:
“In April 2015, members of the UK’s science establishment attended a closed-door symposium on the reliability of biomedical research. The symposium focused on the contemporary crisis of faith in research. Many prominent researchers believe that as much as half of the scientific literature—not only in medicine, but also in psychology and other fields—may be wrong. Fatal errors and retractions, especially of prominent publications, are increasing.”
Additionally, governments can intervene in the production of papers by adding or redacting findings, contributor names, and so on, in order to suit an agenda. Academic publications can be affected by the politics of the scientist’s location. For example, papers may be publicly inaccessible in a nation because their contents threaten propaganda used to control the public’s perceptions.
Institutional incentives tend to favor poor research methods and abuse of statistical procedures:
“When researchers are rewarded primarily for publishing, then habits which promote publication are naturally selected.”
In a competitive industry such as academic science, praise from professors and peers is highly sought after. Publishing as many findings as possible while remaining accurate can therefore be a difficult balance to maintain:
“‘Scientists are human and will therefore respond (consciously or unconsciously) to incentives; when personal success (e.g. promotion) is associated with the quality and (critically) the quantity of publications produced, it makes more sense to use finite resources to generate as many publications as possible’.”
Frederik Anseel, a professor at King’s College London, states that “there is probably a serious problem with mental health in academia” and that it “probably has something to do with how academia is organized as an industry, how we train people, how we manage people, and how careers develop”. An overly competitive field, relentless deadlines, and social isolation likely all have some bearing on the mental health of people in academia.
In a ScienceMag article, Arnav Chhabra, a grad student at Harvard-MIT Health Sciences and Technology in Cambridge, Massachusetts, details the difficulty of maintaining a healthy work-life balance:
“In my third year of grad school, everything seemed to fall apart. I was dealing with my grandmother’s death, and then my girlfriend and I broke up. I spent the following year in a painful feedback loop of depression and despair. Every day, I would trudge into lab and try to get excited about my projects. But when I encountered minor hurdles such as a failed replication or contaminated samples, I would become discouraged and give up. Even when my experiments went smoothly, I felt guilty about the time I had wasted being unproductive. I knew I was struggling, but I didn’t ask for help. I thought I could deal with my state of mind just as I had dealt with every other problem in my life: Bottle up my emotions, attack the problem with logic, and iterate until I arrived at a solution.”
Another problem that Smaldino and McElreath revealed is that sample sizes, which underpin statistically sound conclusions, have not increased in the last 50 years.
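To see why stagnant sample sizes matter, here is a rough sketch (my own illustration, not from the article) of statistical power for a two-sided two-sample z-test, using the standard normal approximation. With a medium effect size of d = 0.5, small groups routinely miss real effects, while about 64 subjects per group are needed to reach the conventional 80% power.

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(d: float, n_per_group: int, z_crit: float = 1.959964) -> float:
    """Approximate power of a two-sided two-sample z-test with
    effect size d (Cohen's d) and equal group sizes, at alpha = 0.05."""
    return norm_cdf(d * sqrt(n_per_group / 2) - z_crit)

for n in (10, 20, 64, 200):
    print(f"n = {n:3d} per group -> power ≈ {power_two_sample(0.5, n):.2f}")
```

Underpowered studies not only miss true effects; the “significant” results they do produce are disproportionately likely to be false discoveries or inflated estimates, which is why static sample sizes feed the broader reproducibility problem.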