Essential insights from Hacker News discussions

Scientific Papers: Innovation or Imitation?

Publish or Perish: The Pressure to Publish Inhibits Innovation

A central theme in the discussion is the detrimental effect of the "publish or perish" culture on genuine innovation in academia. The relentless pressure to publish leads researchers to prioritize quantity over quality, favoring incremental, low-risk studies over groundbreaking but time-consuming investigations.

  • "In my experience, the publication pressure in today's science is to large extent inhibiting innovation. How can you innovate when you need to have X papers every year, otherwise you will not get that position of funding. To fulfill the quota, the only rational strategy is to focus on simple iterative papers that are very similar to what everybody else is doing. There is simply no time to innovate or be brave, you have to comfort" - empiko
  • "I completely agree that 'publish or perish' harms innovation. Funding and research positions have become so predicated on rapid and consistent publication that it incentives researchers to focus on incremental and generally low-risk ideas that they can propose, develop, and publish quickly and predictably. Nobody has the time or energy anymore to focus on bigger and braver (your word) ideas that are less incremental and cannot be developed in predictable time frames." - atrettel
  • "Too much imitation delays innovation." - agarttha

Gaming the System: Metrics and Faddishness

Participants point out that the current evaluation system leans heavily on easily gamed metrics such as publication count, citations, and impact factors. This incentivizes researchers to maximize publication output regardless of its actual contribution to the field, encouraging faddishness and questionable research practices.

  • "Profs who want to be seen as productive and who want good funding publish 30-50 papers per year and sometimes 'supervise' dozens of PhD students at the same time (who agree to the deal to get the brand name of the big prof, not for any real supervision). Funding agencies can't evaluate the research itself, so they look at numbers, metrics, impact factors, citations, h-index, publication count etc. They can't simply say 'we pay this academic whether he publishes or not because we trust he is still deep in important work when he is not at a work stage to publish' because people will suspect fraud and nepotism and bias, and often the funding is taxpayer money. " - bonoboTP
  • "Some papers seemed to be near duplicates of prior work by the same academic, with minor modification. Papers featuring the latest buzz technologies regardless of whether they were appropriate. Some senior academics would get their name included on publications regardless of whether they had been actively engaged with that project." - Daub
  • "I've seen faddishnes and questionable authorship in a top-3 Japan university too. The lab I was in was a paper mill, the professor even explicitly told student than quantity > quality." - tokinonagare

Specialization vs. Convergence in Fields like AI

Several comments focus on the specific challenges within fields like AI, where rapid advancements and a large influx of researchers have created an intensely competitive environment. The convergence of methods and the ease of building on existing research have made it more difficult to establish a niche and pursue in-depth, undisturbed research.

  • "And then you have a subfield with giant conferences, a lot of money, and a lot of people doing similar things." - jltsiren
  • "It used to be that way in AI a decade ago. Different subfields used bespoke methods you could specialize in and could take a fairly undisturbed 3-5 years to work on it without constant worries of being scooped and therefore having to rush to publish something half baked to plant flags. Nowadays methods are converging, it's comparatively less useful to be an expert in some narrow application area, since the standard ML methods work quite well for such a broad range of uses (see the bitter lesson). " - bonoboTP
  • "This definitely can't go on forever and there will be a massive reality check in academia (of AI/ML)." - bonoboTP

Replication Crisis and the Value of Incremental Work

A nuanced point is raised regarding the potential, albeit perhaps unintentional, value of follow-up papers that extend or expand on original findings, particularly in light of the replication crisis. However, others are skeptical, highlighting the prevalence of minor variations on established work that offer little real advancement.

  • "Follow-up papers by other authors which โ€œonly extend or expand on the specific finding in very minor waysโ€ have a secondary benefit. In addition to expanding the original findings, they are also implicitly replicating the original result. This is perhaps a crucial contribution in light of the replication crisis!" - kevinventullo
  • "If only. I worked in cog/neuro sci, and the career builders there produce small variations on the original. Variations on the Stroop task, which dates back to 1935(!), are still being published, despite the fact that there is no explanation for the effect. And when you consider that null results are rarely published, and that many aspects of the methodology are flawed, a new paper cannot be considered a replication: it's just wishful thinking upon wishful thinking." - tgv

Funding and Grant Money Influence Research

External funding, particularly grant money, wields significant influence over research directions. The increasing administrative burden of securing grants, coupled with funding agencies' trend toward specific, often pre-determined topics, further constrains innovation.

  • "So much of academic life revolves around bringing in grant money. This is particularly true in STEM fields and at the best research schools. There are ever increasing administrative hoops to jump through to bring in that grant money. And grants nowadays are often given out for research on very specific topics often chosen by bureaucrats. These topics are, almost by definition, not innovative." - kj4211cash
  • "For all the emphasis on high risk research, the system doesn't reward it. The VC world may understand that only 1/100 projects will be novel and perhaps successful, but funding agencies don't" - agarttha

Questioning Peer Review

The discussion brings up concerns about the reliability of peer review:

  • "Establish peer review as the metric of tenure and make dam sure those peers know their stuff." - Daub
  • "And you are back at square one: peer reviews become the currency used in academic politics. A relatively small group of tenured academics have all the incentives to independently form a fiefdom. Anonymization does not help as everyone knows work and papers of the rest anyway." - friendzis

Potential Solutions and a Call for a Shift in Values

While the discussion primarily catalogs problems, there is an implicit call for a shift in values: redefining what it means to be a productive scientist by valuing creativity, rewarding high-risk research, and recognizing the importance of negative results and failed experiments. Participants also suggest exploring evaluation methods that do not rely solely on publication counts and citations. More broadly, the thread points to the need for funding agencies to adopt a more failure-tolerant approach, similar to the VC world, acknowledging that truly novel research carries a higher risk of unsuccessful outcomes.