Tag Archives: peer review

The case for Open Research: solutions?

This series arguing the case for Open Research has to date looked at some of the issues in scholarly communication today. Hyperauthorship, HARKing, the reproducibility crisis and a surge in retractions all stem from the requirement that researchers publish in high impact journals. The series has also looked at the invalidity of the impact factor and issues with peer review.

This series is one of an increasing cacophony of calls to move away from this method of rewarding researchers. Richard Smith noted in a recent BMJ blog criticising the current system of publication in journals: “The whole outdated enterprise is kept alive for one main reason: the fact that employers and funders of researchers assess researchers primarily by where they publish. It’s extraordinary to me and many others that the employers, mainly universities, outsource such an important function to an arbitrary and corrupt system.”

Universities need to open up research to ensure academic integrity, adjust to support modern collaboration and scholarship tools, and begin rewarding people who have engaged in certain types of process rather than relying on traditional assessment schemes. This was the thrust of a talk given in October last year, “Openness, integrity & supporting researchers”. If nothing else, this approach makes ‘nightmare scenarios’ less likely. As Prof Tom Cochrane said in the talk, the last thing an institution needs is to be on the front page because of a big fraud case.

What would happen if we started valuing and rewarding other parts of the research process? This final blog in the series looks at opening up research to increase transparency. The argument is that we need to move beyond rewarding only the journal article, and recognise not only other research outputs, such as data sets, but research productivity itself.

So, let’s look at how opening up research can address some of the issues raised in this series.

Rewarding study inception

In his presentation about HARKing (Hypothesising After the Results are Known) at FORCE2016, Eric Turner, Associate Professor at OHSU, suggested that what matters is the scientific question and methodological rigour. We should be emphasising not study completion but study inception, before anyone can be biased by the results. It is already a requirement to post results of industry-sponsored research in ClinicalTrials.gov – a registry and results database of publicly and privately supported clinical studies of human participants conducted around the world. Turner argues we should be using it to see the existence of studies. He suggested reviews of protocols should happen without the results (but not include the methods section, because this is written after the results are known).

There are some attempts to do this already. In 2013 Registered Reports was launched: “The philosophy of this approach is as old as the scientific method itself: If our aim is to advance knowledge then editorial decisions must be based on the rigour of the experimental design and likely replicability of the findings – and never on how the results looked in the end.” The proposal and process are described here. The guidelines for reviewers and authors are here, including the requirement to “upload their raw data and laboratory log to a free and publicly accessible file-sharing service.”

This approach has been met with praise by a group of scientists with positions on more than 100 journal editorial boards, who are “calling for all empirical journals in the life sciences – including those journals that we serve – to offer pre-registered articles at the earliest opportunity”. The signatories noted “The aim here isn’t to punish the academic community for playing the game that we created; rather, we seek to change the rules of the game itself.” And that really is the crux of the argument. We need to move away from the one point of reward.

Getting data out there

There is definite movement towards opening research. In the UK there is now a requirement from most funders that the data underpinning research publications are made available. Down under, the Research Data Australia project is a register of data from over 100 institutions, providing a single point to search, find and reuse data. The European Union has an Open Data Portal.

Resistance to sharing data amongst the research community is often due to the idea that if data is released with the first publication, there is a risk that the researcher will be ‘scooped’ before they can get those all-important journal articles out. In response to this concern during a discussion with the EPSRC, it was pointed out that the RCUK Common Principles state that those who undertake Research Council funded work may be entitled to a limited period of privileged use of the data they have collected, to enable them to publish the results of their research. However, the length of this period varies by research discipline.

If the publication of data itself were rewarded as a ‘research output’ (which of course is what it is), then the issue of being scooped becomes moot. There have been small steps towards this goal, such as a standard method of citing data.

A new publication option is Sciencematters, which allows researchers to submit observations which are subjected to triple-blind peer review, so that the data is evaluated solely on its merits, rather than on the researcher’s name or organisation. As they indicate “Standard data, orphan data, negative data, confirmatory data and contradictory data are all published. What emerges is an honest view of the science that is done, rather than just the science that sells a story”.

Despite the benefits of having data available there are some vocal objectors to the idea of sharing data. In January this year a scathing editorial in the New England Journal of Medicine suggested that researchers who used other people’s data were ‘research parasites’. Unsurprisingly this position raised a small storm of protest (an example is here). This was so sustained that four days later a clarification was issued, which did not include the word ‘parasites’.

Evaluating & rewarding data

Ironically, one benefit of sharing data could be an improvement in the quality of the data itself. A 2011 study into why some researchers were reluctant to share their data found that reluctance to share was associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Professor Marcus Munafo, in his presentation at the Research Libraries UK conference held earlier this year, suggested that we need to build quality control methods into our daily practice. Open data is a very good step in that direction. There is evidence that researchers who know their data is going to be made open are more thorough in checking it. Maybe it is time for an update in the way we do science – we have statistical software that can run hundreds of analyses, and we can do text and data mining across large numbers of papers. We need to build in new processes and systems that refine science, and think about new ways of rewarding it.

So should researchers be rewarded simply for making their data available? Probably not; some kind of evaluation is necessary. In a public discussion about data sharing held at Cambridge University last year, it was suggested that rather than formal peer review of data, it would be better to have an evaluation structure based on the re-use of data – for example, valuing data which is downloadable, well-labelled and re-usable.

Need to publish null results

Generally, this series looking at the case for Open Research has argued that the big problem is that the only thing that ‘counts’ is publication in high impact journals. So what happens to all the results that don’t ‘find’ anything?

Most null results are never published. A 2014 study found that of 221 sociological studies conducted between 2002 and 2012, only 48% of the completed studies had been published. This is a problem because not only is the scientific record inaccurate, but the resulting publication bias “may cause others to waste time repeating the work, or conceal failed attempts to replicate published research”.

But it is not just the academic reward system that is preventing the widespread publication of null results – the interference of commercial interests in the publication record is another factor. A recent study looked into the issue of publication agreements, and whether a research group had signed one prior to conducting randomised clinical trials for a commercial entity. The research found that 70% of protocols mentioned an agreement on publication rights between industry and academic investigators; in 86% of those agreements, industry retained the right to disapprove of, or at least review, manuscripts before publication. Even more concerning was that journal articles seldom report on publication agreements and, where they do, statements can be discrepant with the trial protocol.

There are serious issues with the research record due to selective results and selective publication, which would be ameliorated by a requirement to publish all results – including null results.

There are some attempts to address this issue. Since June 2002 the Journal of Articles in Support of the Null Hypothesis has been published bi-annually. The World Health Organisation has a Statement on the Public Disclosure of Clinical Trial Results, saying: “Negative and inconclusive as well as positive results must be published or otherwise made publicly available”. A project launched in February last year by PLOS ONE is a collection focusing on negative, null and inconclusive results; the Missing Pieces collection had 20 articles in it as of today.

In January this year there were reports that a group of ten editors of management, organisational behaviour and work psychology journals had pledged to publish the results of well-conceived, designed, and conducted research even if the result was null. The way this works is that the paper is first presented without results or discussion, and is assessed on theory, methodology, measurement information, and analysis plan.

Movement away from using the impact factor

As discussed in the first of this series of blogs, ‘The mis-measurement problem‘, we have an obsession with high impact journals. These blogs have been timely, falling as they have within what seems to be a plethora of similarly focused commentary. An example is a recent Nature news story by Mario Biagioli, who argued that the focus on the impact of published research has created new opportunities for misconduct and fraudsters. The piece concludes that “The audit culture of universities — their love affair with metrics, impact factors, citation statistics and rankings — does not just incentivize this new form of bad behaviour. It enables it.”

In recent discussion amongst the scholarly communication community about this mis-measurement, it was suggested that the problem could be addressed by limiting the number of articles that can be submitted for promotion. Ideally this reduces the volume of papers produced overall, or so the thinking goes. Harvard Medical School and the Computing Research Association “Best Practices Memo” were cited as examples by different people.

This is also the approach taken by the Research Excellence Framework in the UK – researchers put forward their best four works from the previous period (typically about five years). But it does not prevent poor practice. Researchers are constantly evaluated for all manner of reasons: promotion, competitive grants, tenure and admittance to fellowships are just a few of the many contexts in which a researcher’s publication history will be considered.

Are altmetrics a solution? There is a risk that any alternative indicator becomes an end in itself. The European Commission now has an Open Science Policy Platform, which, amongst other activities has recently established an expert group to advise on the role of metrics and altmetrics in the development of its agenda for open science and research.

Peer review experiments

Open peer review is where peer review reports identify the reviewers and are published with the papers. One of the more recent publishers to use this method of review is the University of California Press’ open access mega-journal Collabra, launched last year. In an interview published by Richard Poynder, UC Press Director Alison Mudditt notes that there are many people who would like to see more transparency in the peer review process. There is some evidence that identifying reviewers results in more courteous reviews.

PLOS ONE publishes work after an editorial review process which excludes potentially subjective assessments of significance or scope in order to focus on technical, ethical and scientific rigour. Once an article is published, readers are able to comment on the work in an open fashion.

One solution could be that used by the Cambridge University Press journal JFM Rapids, a ‘fast-track’ section of the Journal of Fluid Mechanics offering fast publication for short, high-quality papers. It also operates a policy whereby no paper is reviewed twice, so authors must ensure that their paper is as strong as possible in the first instance. The benefit is a fast turnaround time and reduced reviewer fatigue.

There are calls for post-publication peer review. Although some attempts at this have been unsuccessful, there are arguments that its adoption is simply a matter of time – particularly if reviewers are incentivised. One publisher that uses this system is F1000Research, a platform which publishes work immediately and invites open post-publication review. And, just recently, Wellcome Open Research was launched using services developed by F1000Research. It will make research outputs available faster and in ways that support reproducibility and transparency, using an open access model of immediate publication followed by transparent, invited peer review and inclusion of supporting data.

Open ways of conducting research

All of these initiatives demonstrate a definite movement towards an open way of doing research by addressing aspects of the research and publication process. But there are some research groups that are taking a holistic approach to open research.

Marcus Munafo published last month a description of the experience of the UK Centre for Tobacco and Alcohol Studies and the MRC Integrative Epidemiology Unit at the University of Bristol over the past few years in attempting to work within an open science model focused on three core areas: study protocols, data, and publications.

Another example is the Open Source Malaria project which includes researchers and students using open online laboratory notebooks from around the world including Australia, Europe and North America. Experimental data is posted online each day, enabling instant sharing and the ability to build on others’ findings in almost real time. Indeed, according to their site ‘anyone can contribute’. They have just announced that undergraduate classes are synthesising molecules for the project. This example fulfils all of the five basic principles of open research suggested here.

The Netherlands Organisation for Scientific Research (NWO) has just announced that it is making 3 million euros available for a Replication Studies pilot programme. The pilot will concentrate on the replication of social sciences, health research and healthcare innovation studies that have a large impact on science, government policy or the public debate. The intention after this study will be to “include replication research in an effective manner in all of its research programmes”.

A review of literature published this week has demonstrated that open research is associated with increases in citations, media attention, potential collaborators, job opportunities and funding opportunities. These findings are evidence, the authors say, “that open research practices bring significant benefits to researchers relative to more traditional closed practices”.

This series has been arguing that we should move to Open Research as a way of changing the reward system that bastardises so much of the scientific endeavour. However there may be other benefits according to a recently published opinion piece which argues that Open Science can serve a different purpose to “help improve the lot of individual working scientists”.

Conclusion

There are clearly defined problems within the research process that in the main stem from the need to publish in high impact journals. Throughout this blog there are multiple examples of initiatives and attempts to provide alternative ways of working and publishing.

However, all of this effort will only succeed if those doing the assessing change the rules of the game. This is tricky. Often the people who have succeeded have some investment in the status quo remaining. We need strong and bold leadership to move us out of this mess and towards a more robust and fairer future. I will finish with a quote that has been attributed to Mark Twain, Einstein and Henry Ford. “If you always do what you’ve always done, you’ll always get what you’ve always got”. It really is up to us.

Published 2 August 2016
Written by Dr Danny Kingsley
Creative Commons License

The case for Open Research: does peer review work?

This is the fourth in a series of blog posts on the Case for Open Research, this time looking at issues with peer review. The previous three have looked at the mis-measurement problem, the authorship problem and the accuracy of the scientific record. This blog follows on from the last and asks: if peer review is working, why are we facing issues like increased retractions and the inability to reproduce a considerable proportion of the literature? (Spoiler alert – peer review only works sometimes.)

Again, there is an entire corpus of research behind peer review; this blog post merely scratches the surface. As a small indicator, there has been a Peer Review Congress held every four years for the past thirty years (see here for an overview). Readers might also be interested in some work I did on this, published as The peer review paradox – an Australian case study.

There is a second, related post published with this one today. Last year Cambridge University Press invited a group of researchers to discuss the topic of peer review – the write-up is here.

An explainer

What is peer review? Generally, peer review is the process by which research submitted for publication is overseen by colleagues who have expertise in the same or similar field before publication. Peer review is defined as having several purposes:

  • Checking the work for ‘soundness’
  • Checking the work for originality and significance
  • Determining whether the work ‘fits’ the journal
  • Improving the paper

Last year, during peer review week the Royal Society hosted a debate on whether peer review was fit for purpose. The debate found that in principle peer review is seen as a good thing, but the implementation is sometimes concerning. A major concern was the lack of evidence of the effectiveness of the various forms of peer review.

Robert Merton in his seminal 1942 work The Normative Structure of Science described four norms of science*. ‘Organised scepticism’ is the norm that scientific claims should be exposed to critical scrutiny before being accepted. How this has manifested has changed over the years. Refereeing in its current form, as an activity that symbolises objective judgement of research, is a relatively new phenomenon – one that has only taken hold since the 1960s. Indeed, Nature was still publishing some unrefereed articles until 1973.

(*The other three norms are ‘Universalism’ – that anyone can participate, ‘Communism’ – that there is common ownership of research findings and ‘Disinterestedness’ – that research is done for the common good, not private benefit. These are an interesting framework with which to look at the Open Access debate, but that is another discussion.)

Crediting hidden work

The authorship blog in this series looked at credit for contribution to a research project, but the academic community contributes to the scholarly ecosystem in many other ways. One of the criticisms of peer review is that it is ‘hidden’ work that researchers do. Most peer review is ‘double blind’, where the reviewer does not know the name of the author and the author does not know who is reviewing the work. This makes it very difficult to quantify who is doing this work. Peer review and journal editing represent a huge tranche of unpaid work that academics contribute to research.

One of the issues with peer review is the sheer volume of articles being submitted for publication each year. A 2008 study, ‘Activities, costs and funding flows in the scholarly communications system‘, estimated the global unpaid non-cash cost of peer review at £1.9 billion annually.

There have been some calls to recognise peer review as part of the academic workflow. In January 2015 a group of over 40 Australian Wiley editors sent an open letter, Recognition for peer review and editing in Australia – and beyond?, to their universities, funders, and other research institutions and organisations in Australia, calling for a way to reward the work. In September that year Mark Robertson, publishing director for Wiley Research Asia-Pacific, said “there was a bit of a crisis” with peer reviewing, with new approaches needed to give peer reviewers appropriate recognition and to encourage institutions to allow staff to put time aside to review.

There are some attempts to do something about this problem. A service called Publons is a way to ‘register’ the peer review a researcher is undertaking. There have also been calls for an ‘R index’ which would give citable recognition to reviewers. The idea is to improve the system by both encouraging more participation and providing higher quality, constructive input, without the need for a loss of anonymity.

Peer review fails

The secret nature of peer review means it is also potentially open to manipulation; an example of problematic practice is peer review fraud. A recurrent theme throughout discussions on peer review at this year’s Researcher to Reader conference (see the blog summary here) was that finding and retaining peer reviewers is a challenge that is getting worse. As obtaining willing peer reviewers becomes more difficult, it is not uncommon for the journal to ask the author to nominate possible reviewers. However, this can lead to peer review ‘fraud’, where the nominated reviewer is not who they are meant to be, which means articles make their way into the literature without actual review.

In August 2015 Springer was forced to retract 64 articles from 10 journals, ‘after editorial checks spotted fake email addresses, and subsequent internal investigations uncovered fabricated peer review reports’.  They concluded the peer review process had been ‘compromised’.

In November 2014, BioMed Central uncovered a scam and was forced to retract close to 50 papers because of fake peer review. This prompted BioMed Central to publish the blog ‘Who reviews the reviewers?’ and Nature to write a story, ‘Publishing: the peer review scam’.

In May 2015 Science retracted a paper because the supporting data was entirely fabricated. The paper got through peer review because it had a big-name researcher on it. There is a lengthy (but worthwhile) discussion of the scandal here. The final clue was getting hold of a closed data set: it ‘wasn’t a publicly accessible dataset, but Kalla had figured out a way to download a copy’. This is why we need open data, by the way …

But is peer review itself the problem here? Is this all not simply the result of the pressure on the research community to publish in high impact journals for their careers?

Conclusion

So at the end of all of this, is peer review ‘broken’? Yes, according to a study of 270 scientists worldwide published last week. But a considerably larger study published last year by Taylor and Francis showed enthusiasm for peer review. The white paper, Peer review in 2015: a global view, gathered “opinions from those who author research articles, those who review them, and the journal editors who oversee the process”. It found that researchers value the peer review process. Most respondents agreed that peer review greatly helps scholarly communication by testing the academic rigour of outputs. The majority also reported that they felt the peer review process had improved the quality of their own most recent published article.

Peer review is the ‘least worst’ process we have for ensuring that work is sound. Generally the research community requires some sort of review of research, but there are plenty of examples showing that our current peer review process is not delivering the consistent verification it should. The system is relatively new, and it is perhaps time to look at shifting the nature of peer review once more. One option is to open up peer review, and this can take many forms: identifying reviewers, publishing reviews with a DOI so they can be cited, publishing the original submitted article with all the reviews and the final work, and allowing previous reviews to be attached to the resubmitted article are all possibilities.

Adopting one or all of these practices benefits reviewers because it exposes the hidden work involved in reviewing. It can also reduce the burden on reviewers by minimising the number of times a paper is re-reviewed (remember the rejection rate of some journals is up to 95%, meaning papers can be cascaded and re-reviewed multiple times).

This is the last of the ‘issues’ blogs in the case for Open Research series. The series will turn its attention to some of the solutions now available.

Published 19 July 2016
Written by Dr Danny Kingsley
Creative Commons License

‘It is all a bit of a mess’ – observations from Researcher to Reader conference

“It is all a bit of a mess. It used to be simple. Now it is complicated.” This was the conclusion of Mark Carden, the coordinator of the Researcher to Reader conference, after two days of discussion, debate and workshops about scholarly publication.

The conference bills itself as: ‘The premier forum for discussion of the international scholarly content supply chain – bringing knowledge from the Researcher to the Reader.’ It was unusual because it mixed ‘tribes’ who usually go to separate conferences. Publishers made up 47% of the group, Libraries were next with 17%, Technology 14%, Distributors were 9% and there were a small number of academics and others.

In addition to talks and panel discussions there were workshops in which smaller groups met three times and were asked to come up with proposals. To keep this blog to a manageable length, it does not include the discussions from the workshops.

The talks were filmed and will be available. There was also a very active Twitter discussion at #R2RConf.  This blog is my attempt to summarise the points that emerged from the conference.

Suggestions, ideas and salient points that came up

  • Journals are dead – the publishing future is the platform
  • Journals are not dead – but we don’t need issues any more as they are entirely redundant in an online environment
  • Publishing in a journal benefits the author not the reader
  • Dissemination is no longer the value added offered by publishers. Anyone can have a blog. The value-add is branding
  • The drivers for choosing research areas are what has been recently published, not what is needed by society
  • All research is generated from what was published the year before – and we can prove it
  • Why don’t we disaggregate the APC model and charge for sections of the service separately?
  • You need to provide good service to the free users if you want to build a premium product
  • The most valuable commodity as an editor is your reviewer time
  • Peer review is inconsistent and systematically biased
  • The greater the novelty of the work the greater likelihood it is to have a negative review
  • Poor academic writing is rewarded

Life After the Death of Science Journals – How the article is the future of scholarly communication

Vitek Tracz, the Chairman of the Science Navigation Group, which produces the F1000Research series of publishing platforms, was the keynote speaker. He argued that we are coming to the end of journals. One of the issues with journals is that their essence is selection. The referee system is secret – the editors won’t usually tell the author who the referee is, because the referee is working for the editor, not the author. The main task of peer review is to accept or reject the work – there may be some aim of improving the paper, but that decision is taken not by the referees but by the editor, who has the Impact Factor to consider.

This system allows information to be published that should not be published – eventually almost every paper will find somewhere to be published. Even in high-level journals many papers cannot be replicated. A survey of PubMed found there was no correlation between impact factor and the likelihood of an abstract being looked at on PubMed.

Readers can now get the papers they want by themselves and create their own collections of work that interests them. But authors need journals because the Impact Factor is so deeply embedded. Placement in a prestigious journal doesn’t increase readership, but it does increase the likelihood of getting tenure. So authors need journals; readers don’t.

Vitek noted F1000Research “are not publishers – because we do not own any titles and don’t want to”. Instead they offer tools and services. It is not publishing in the traditional sense because there is no decision to publish or not publish something – that process is completely driven by authors. He predicted that the future of science publishing will be a shift from journals to services (there will be more tools and more publishing directly on funder platforms).

In response to a question about impact factor and changing author motivation, Vitek said “the only way of stopping impact factors as a thing is to bring the end of journals”. This aligns with the conclusions of a paper I co-authored some years ago, ‘The publishing imperative: the pervasive influence of publication metrics’.

Author Behaviours

Vicky Williams, the CEO of research communications company Research Media, discussed “Maximising the visibility and impact of research” and talked about the need to translate complex research ideas into understandable language.

She noted that the public does want to engage with research: a large percentage of the public want to know about research while it is happening. However, they see communication about research as poor, and there is low trust in science journalism.

Vicky noted the different funding drivers – funding is now very heavily distributed, and research institutions have to look at alternative funding options. We now have students as consumers – they are mobile and create demand – and traditional content formats are being challenged.

As a result, institutions need to compete for talent. They need to build relationships with industry – and promotion is a way of achieving that. Most universities have a strong emphasis on outreach and engagement.

This means we need a different language, a different tone and a different medium. However, academic outputs are written for other academics, and most research is impenetrable to other audiences. This has long been a bugbear of mine (see ‘Express yourself scientists, speaking plainly isn’t beneath you’).

Vicky outlined some steps for showcasing research – develop a communications plan, network with colleagues, create a lay summary, use visual aids, and engage. She argued that this activity acts as a research CV.

Rick Anderson, Associate Dean at the University of Utah, talked about the 'deeply weird ecosystem' of publishing, with its many different players – authors (who send papers out), publishers (who send out publications), readers (who demand subscriptions) and libraries (who subscribe or cancel). All of these players send signals out into the scholarly communications ecosystem, and when we send signals out we get partial and distorted signals back.

An example is that publishers set prices without knowing the value of the content. The content each publisher controls is unique – there are no substitutable products.

He also noted the growing prevalence of funding with strings attached. Funders are now imposing conditions on how you publish, covering not just the narrative of the research but the underlying data. In addition, the institution you work for might have rules requiring you to publish in particular ways.

Rick urged authors to answer the question 'what is my main reason for publishing?' – as distinct from writing. In reality it is primarily to publish in high impact journals. By choosing to publish in a particular journal an author is casting a vote for their own future: 'Who has power over my future – do they care about where I publish? I should take notice of that.' He said that publishing with Elsevier turns control over to them, while publishing in PLOS turns control over to the world.

Rick mentioned some journal selection tools. JANE is a system (oriented to the biological sciences) where authors can plug an abstract into a search box; it analyses the language and comes up with a suggested list of journals. The Committee on Publication Ethics (COPE) member list provides a 'white list' of publishers. Journal Guide helps researchers select an appropriate journal for publication.

A tweet noted that “Librarians and researchers are overwhelmed by the range of tools available – we need a curator to help pick out the best”.

Peer review

Alice Ellingham, Director of Editorial Office Ltd, which runs online journal editorial services for publishers and societies, discussed 'Why peer review can never be free (even if your paper is perfect)'. Alice described the different processes associated with securing and chasing peer review.

She said the unseen cost of peer review is communication, as the editorial office provides assistance to all participants. She estimated that managing the peer review takes about 45-50 minutes per submission.

Editorial Office tasks include checking a paper against the journal's scope and submission policy, checking ethics, and checking declarations such as competing interests and funding. They then organise the review, assist the editors to make a decision, and do the copy editing and technical editing.

Alice used an animal analogy – a cheetah representing the speed of peer review that authors would like to see, and a tortoise representing what they actually experience. This was very interesting given the Nature news piece published on 10 February, "Does it take too long to publish research?".

Will Frass, a Research Executive at Taylor & Francis, discussed the findings of a T&F study, "Peer review in 2015 – A global view". This is a substantial report and I won't be able to do his talk justice here; there is some information about the report here, and a news report about it here.

One comment that struck me was that researchers in the sciences are generally more comfortable with single blind review than those in the humanities. Will noted that because STM fields have small niches, double blind review often becomes single blind anyway, as everyone knows each other.

A comment from the floor was that reviewers spend eight hours on a paper, and their time is more important than publishers' – so what can publishers do to support peer review? While this was not really answered on the floor*, it did cause a bit of a flurry on Twitter, with a discussion about whether the time spent is really five hours or eight hours, quoting different studies.

*As a general observation, given that half of the participants at the conference were publishers, they were very underrepresented in the comments and discussion. This included the numerous times when a query or challenge was put to the publishers in the room. As someone who works collaboratively and openly, I found this somewhat frustrating.

The Sociology of Research

Professor James Evans, a sociologist studying the science of science at the University of Chicago, spoke about how research scientists actually behave as individuals and in groups.

His work focuses on using data from the publication process to tell rich stories about the process of science. James spoke about some recent research results relating to the reading and writing of science (including peer review), the publication of science, and the rewarding of science.

James compared the effects of writing styles to see which is more effective in terms of reward (citations). He pitted 'clarity' – using few words and short sentences, the present tense, and keeping the message on point – against 'promotion' – where the author claims novelty and uses superlatives and active words.

The research found that writing with clarity is associated with fewer citations, while writing in a promotional style is associated with more citations. So redundancy, long clauses and mixed metaphors end up enhancing a paper's searchability. This harks back to the conversation about poor academic writing the day before – bad writing is rewarded.

Scientists write to influence the reviewers and editors in the process. They strategically understand the class of people who will review their work and know reviewers will be flattered when they see their own research cited – hence strategic citation practices.

James noted that peer review is the gold standard for evaluating the scientific record, yet in terms of determining the importance or significance of scientific works, his research shows it is inconsistent and systematically biased. The greater the reviewer's distance from the work, the more positive the review – possibly because a person reviewing work close to their speciality can see all the criticisms. And the greater the novelty of the work, the more likely it is to receive a negative review. It is possible to 'game' this by driving the makeup of the peer review panel. James expressed his dislike of the institution of author-suggested reviewers: these provide more positive and more influential, but worse, reviews (according to the editors).

Scientists understand the novelty bias, so they downplay the new elements of their work relative to the old. James discussed Thomas Kuhn's concept of the 'essential tension' between 'career considerations' – which result in job security, publication and tenure (following the crowd) – and 'fame' – which results in Nature papers and, hopefully, a Nobel Prize.

This is a challenge because the optimal question for science diverges from the optimal question for a scientific career. Because of career considerations, we sacrifice the pursuit of a diffuse range of research areas in favour of crowded hubs.

The centre of the research cycle is publication rather than the ‘problems in the world’ that need addressing. Publications bear the seeds of discovery and represent how science as a system thinks. Data from the publication process can be used to tune, critique and reimagine that process.

James demonstrated research clearly showing that today's research is driven by last year's publications. Literally. The work takes a given paper, extracts the authors, the diseases, the chemicals and so on, and then uses a 'random walk' program. The result predicts 95% of the combinations of authors, diseases and chemicals appearing in the following year.

Wherever scientists think they are getting their ideas from, the actual origin is traceable in the literature. This means that research directions are not driven by, for example, global or local health needs.
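James's talk did not spell out the model's mechanics, but the basic idea – extract entities from papers, link entities that co-occur, then wander the graph to predict next year's combinations – can be sketched in a few lines. This is purely illustrative; the entity names and toy data below are invented, not taken from his work.

```python
import random
from collections import defaultdict

# Toy corpus: each "paper" is the set of entities extracted from it
# (authors, diseases, chemicals). Entirely made-up data.
papers = [
    {"author:A", "disease:flu", "chem:X"},
    {"author:A", "disease:flu", "chem:Y"},
    {"author:B", "disease:cold", "chem:X"},
]

# Build a co-occurrence graph: entities are nodes, and an edge links
# any two entities that appear together in the same paper.
graph = defaultdict(set)
for paper in papers:
    for a in paper:
        for b in paper:
            if a != b:
                graph[a].add(b)

def random_walk(start, steps, rng):
    """Walk the co-occurrence graph; the set of visited nodes is a
    predicted 'combination' of entities for a future paper."""
    node, visited = start, {start}
    for _ in range(steps):
        neighbours = sorted(graph[node])  # sorted for reproducibility
        if not neighbours:
            break
        node = rng.choice(neighbours)
        visited.add(node)
    return visited

rng = random.Random(0)
prediction = random_walk("author:A", steps=3, rng=rng)
```

On real data the predicted entity sets are compared against the combinations that actually appear in the following year's literature, which is where the 95% figure James quoted comes from.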

Panel: Show me the Money

I sat on this panel discussion about 'The financial implications of open access for researchers, intermediaries and readers', which made it challenging to take notes (!), but two things struck me in the discussions:

Rick Anderson suggested that when people talk about 'percentages' of research budgets they don't want you to think about the absolute numbers, noting that 1% of the Wellcome Trust research budget is $7 million and 1% of the NIH research budget is $350 million.

Toby Green, the Head of Publishing for the OECD, put out a challenge to the publishers in the audience. He noted that airlines have split the cost of travel into components (you pay – or choose not to pay – for food, luggage and so on), and suggested that publishers similarly split APCs into the different aspects of the service they offer, allowing authors to choose the elements they want. The OECD has moved to a freemium model in which payment from a small number of premium users funds the free side.

As is – rather depressingly – common in these kinds of discussions, the general feeling was that open access is all about compliance and is too expensive. While I am on the record as saying that the way the UK is approaching open access is not financially sustainable, I do tire of the 'open access is code for compliance' conversation – one of the unexpected consequences of the current UK open access policy landscape. I was forced yet again to remind the group that open access is not about compliance; it is about providing public access to publicly funded research, so that people who are not in well-resourced institutions can also see this research.

Research in Institutions

Graham Stone, the Information Resources Manager, University of Huddersfield talked about work he has done on the life cycle of open access for publishers, researchers and libraries. His slides are available.

Graham discussed how to get open access working to our advantage, saying we need to get it embedded. OAWAL is trying to bring librarians who have had nothing to do with OA into OA.

Graham talked the group through the UK Open Access Life Cycle, which maps the research lifecycle for librarians and repository managers, research managers, authors (who think magic happens) and publishers.

My talk was titled ‘Getting an Octopus into a String Bag’. This discussed the complexity of communicating with the research community across a higher education institution. The slides are available.

The talk discussed the complex policy landscape, the tribal nature of the academic community, the complexity of the structure in Cambridge and then looked at some of the ways we are trying to reach out to our community.

There was nothing really new here from my perspective – it is well known in research management circles that communicating with the research community, as an independent and autonomous group, is challenging, and this is of course further complicated by the structure of Cambridge. But in preliminary discussions about the conference, Mark Carden, the conference organiser, assured me that this would be news to the large number of publishers and others in the audience who are not in a higher education institution.

Summary: What does everybody want?

Mark Carden summarised the conference by talking about what the different stakeholders in the publishing game want.

Researchers/Authors – mostly want to be left alone to get on with their research. They want to get promoted and get tenure, and they don't want to follow rules.

Readers – want content to be free or cheap (or really expensive, as long as someone else is paying). Authors (who are also readers) do care about journals being cancelled if they have published in them. They want a nice, clear, easy interface, because they access research on different publishers' webpages. They don't think about 'you get what you pay for'.

Institutions – don’t want to be in trouble with the regulators, want to look good in league tables, don’t want to get into arguments with faculty, don’t want to spend any money on this stuff.

Libraries – hark back to the good old days, when they had manageable journal subscriptions, free stuff, and expensive subscriptions that justified ERM. Now libraries are reaching for new roles, asking whether they should be publishers, take over the Office of Research, run a repository, or manage APCs.

Politicians – want free public access to publicly funded research. They love free stuff to give away (especially other people’s free stuff).

Funders – want to be confusing, bossy and directive. They want to mandate the output medium and the copyright rules, and possibly to become publishers themselves. Mark noted there are some state-control issues here.

Publishers – "want to give huge piles of cash to their shareholders and want to be evil" (a joke). They want to keep their business model – there is a conservatism in there – and to be able to pay their staff. Publishers would like to realise their brand value, attract paying subscribers, and go on doing most of the things they do, while avoiding freemium. Publishers could become a platform or a megajournal; they should focus on articles, forget about issues, embrace continuous publishing, and manage versioning.

Reviewers – apparently want to do less copy editing, though this is a lot of what they do. Reviewers are conflicted: they want openness and anonymity, slick processes and flexibility, fast turnaround and lax timetables. Mark noted that while reviewers want credit, points, money or something, you would need to pay peer reviewers a lot for it to be worthwhile.

Conference organisers – want the debate to continue. They need publishers and suppliers to stay in business.

Published 18 February 2016
Written by Dr Danny Kingsley
Creative Commons License