Tag Archives: peer review

Reflections on Open Research – a PI’s perspective

As part of the Open Research Pilot Project, Marta Teperek met with Dr David Savage and asked him several questions about his own views and motivations for Open Research. This led to a very inspiring conversation and great reflections on Open Research from the Principal Investigator’s perspective. The main points that came out of the discussion were:

  • Lack of reproducibility raises questions about scientific rigour, integrity and relevance of work in general
  • Being open is to work in a team and be collaborative
  • Open Research will benefit science as a whole, and not the careers of individuals
  • Peer review remains a critical aspect of the scientific process
  • Nowadays, global collaboration and information exchange are possible, making the data really robust
  • Funders should emphasise the importance of research integrity and scientific rigour

This conversation is reported below in the original interview format.

Motivations for doing Open Research

Marta: To start, could you tell me why you are keen on Open Research and why you decided to get involved in the Open Research Pilot Project?

David: Sure, but before we start I wanted to stress that when I make comments about science, these are very general comments and they don’t apply to anyone in particular.

So my general feeling is that I am very concerned and disappointed about the lack of research reproducibility in science. Lack of reproducibility raises questions about scientific rigour, integrity and relevance of work in general. Therefore, I am really keen on exploring ways of addressing these failings of science and I want to make a contribution to solving these problems. Additionally, I am aware that I am not perfect either and I want to learn how I can improve my own practice.

Were there any particular experiences which made you realise the importance of Open Research?

This is just the general experience of reading and also reviewing far too many papers where I thought that the quality of underlying data was poor, or authors were exaggerating their claims without supporting evidence. There is too much hype around, and the general awareness about the number of papers published in high impact journals which cannot be reproduced makes the move to more transparent and open approaches necessary.

Do we need additional rewards for working openly?

How do you think Open Research could benefit academic careers?

I am not sure if Open Research could or should benefit academic careers – this should not be the goal of Open Research. The goal is to improve the quality of science and therefore the benefit of science to the public. Open Research will benefit science as a whole, and not the careers of individuals. Science has become very egotistical and badge-accumulating. We should be investigating things which we find interesting. We should not be motivated by the prize. We should be motivated by the questions.

In science we have far too many people who behave like bankers. Publishing seems to be the currency for them and thus they are sloppy and lack the necessary rigour just because they want to publish as fast as they can.

In my opinion it is the responsibility of every researcher to the profession to try to produce data which is robust. It is fine to make honest mistakes. But it is not acceptable to be sloppy or fraudulent, or not to read enough literature. These are simply not good enough excuses. I’m not claiming to be perfect. But I want to constantly improve myself and my research practice.

Barriers to greater openness in research

What obstacles may be preventing researchers from making their research openly available?

The obvious one is competition for funding, which creates the need to publish in high impact factor journals and consequently leads to the fear of being scooped. And that’s a difficult one to work around. That’s the reason why I do not make everything we do in my research group openly available. However, looking at this from society’s perspective, everything should be made openly available, and as soon as possible, for the sake of greater benefit to mankind. So a balance needs to be found.

Do you think that some researchers might want to make their research open, but might not know how to do it, or might not have the appropriate skills to do it?

Definitely. Researchers need to know about the best ways of making their research open. I am currently trying to work out how to make my own project’s website more open and accessible to others and what are the best ways of achieving this. So yes, awareness of tools and awareness of resources available is necessary, as well as training about working reproducibly and openly. In my opinion, Cambridge has a responsibility to be transparent and open about its processes.

Role of peer-review in improving the quality of research

What frustrates you most about the current scholarly communication systems?

Some people get frustrated with the business model of some of the major publishers. I do not have a problem with it, although I do support the idea of pre-print services, such as bioRxiv. Some researchers get frustrated about the long peer-review process. I am used to the fact that peer review takes a long time, and I accept it because I do not want fraudulent papers to be published. However, flawed peer review, such as biased peer review or a lack of rigour in review, is not acceptable and it is a problem.

So how to improve the peer-review process?

I think that peer-reviewers need to have greater awareness of the need for greater rigour. I was recently asked to peer review an article. The journal had dedicated guidance for peer reviewers. However, the guidance did not contain any information about assessing one’s suitability to undertake the review. Peer-reviewer guidance documents need to address questions like: Do you really know what the paper is about? Do you know the discipline well enough? Are there any conflicts of interest? Would you have the time to properly peer-review the work? Peer review needs to be done properly.

What do you think about the idea of journals employing professional peer-reviewers, who could be experts in their respective fields and could perform unbiased, high quality peer-review?

This sounds very reasonable, as long as professional peer-reviewers stay up to date with science. Though this would of course cost money!

I suppose publishers have enough money to pay for this. Have you heard of open peer-review and what do you think about it?

I think it is fine, but it might be subject to cronyism. I suspect that most people will be more likely to agree to their reviews being made open as long as they are recommending that the paper be accepted.

I recently reviewed a paper of a senior person and I rejected it. But if I made my review open, it would pose a risk to me – what if the author of the paper I rejected was the reviewer of my future grant application? Would they still assess my grant application objectively? What if people start reviewing each other’s papers and start treating peer-review as a mechanism to exchange favours?

The future of Open Research is in your hands

Who or what inspires you and makes you optimistic about the future of Open Research?

In Cambridge and at the Wellcome Trust there are many researchers who care about the quality of science. These researchers inspire me. These are very clever people, who work hard and make important discoveries.

I am also inspired by teamwork and collaboration. In Big Data and in human genetics in particular, people are working collectively. Human genetics and epidemiology are excellent examples of disciplines where 10-20 years ago studies were too small to allow researchers to draw significant and reproducible conclusions. Nowadays, global collaboration and information exchange are possible, making the data really robust. As a result, human genetics is delivering really important observations.

To me, part of being open is to work in a team and be collaborative.

If you had a magic wand and could get one thing changed to get more people to share and open up their research, what would it be?

Not sure… I suppose I am still looking for it! Maybe I will find one during the Open Research Pilot Project. Seriously speaking, I do not believe that a single thing could make a difference. It is the little things that matter. For example, on my side I am trying to make my own lab and institute more aware of reproducibility issues and ensure that I can make a difference in my own environment.

So as a Group Leader, how do you ensure that researchers in your own group are rigorous in their approach?

First, I really make them aware of the importance of reproducible research and of scientific rigour. I am also making a lot of effort to ensure that my colleagues are up to date with literature. I ask them if they read important literature and if they are unable to answer I ask them to do their homework. I am also imposing rigorous standards for experiments. In my lab people repeat the key experiments, or those which are particularly surprising, in a blind fashion. It takes a lot of time and extra resources, but it is important not to be too quick and to validate findings before making claims.

I am also ensuring that my people are motivated. For example, even though everyone helps each other in my group, all PhD students have direct access to me and we have regular discussions about their work. It is important that your group is of a manageable size; otherwise, as a group leader, you will not know all your people and you will not be able to have regular discussions about their work.

How do you identify people who care about reproducible research when making hiring decisions?

I ask all prospective applicants to make a short presentation about their previous work. During their presentation I ask them to tell me exactly what their research question was and how confident they were about their discovery. I am looking for evidence of rigorous methodology, but also for honesty and for people who are not overselling their findings.

In addition, I ask about their career goals. If they tell me that their career goal is to publish in Nature, or have two papers in Science, I count this against them. Instead, I favour applicants who are question-driven, who want to make progress in understanding how things work.

Role of funding bodies in promoting Open Research

Do you think that funders could play a role in promoting Open Research?

Funders could definitely contribute to this. The Wellcome Trust is a particularly notable example of a funding body keen on Open Research. The Trust is currently looking into the best ways to make Open Research the norm. Through various projects such as the Open Research Pilot, the Trust helps researchers like myself to learn best practice on reproducible research, and also to understand the benefits of sharing expertise to improve skills across the research community.

Do you think funder policies to mandate more openness could help?

Potentially. However, policies on Open Access to publications are easy to mandate and relatively easy to interpret and implement. It is much more difficult for Open Research. What does Open Research mean exactly? The right scope and definitions would be key. What should be made open? How? The Wellcome Trust is already doing a lot of work on making important research results available, and human genomic data in particular. But making your proteomic and genomic data publicly available is slightly different from ensuring that your experiments are rigorous and your results honest. So in my opinion, funders should emphasise the importance of research integrity and scientific rigour.

To close our discussion, what do you hope to achieve through your participation in the Open Research Pilot Project?

I want to improve my own lab’s transparency. I want to make sure that we are rigorous and that our research is reproducible. So I want to learn. At the same time I wish to contribute to increased research integrity in science overall.

Acknowledgements

Marta Teperek would like to thank SPARC EUROPE and Dr Joyce Heckman for interviewing her for the Open Data Champions programme – many of the questions asked by Marta in the interview with Dr David Savage originate from inspiring, open questions prepared by SPARC EUROPE.

Published 22 June 2017
Written by Dr Marta Teperek

Creative Commons License

How to get the most out of modern peer-review

On 30th March the Office of Scholarly Communication hosted an event How to get the most out of modern peer-review, bringing together researchers, publishers and library staff to discuss how peer review is changing. Dr Laurent Gatto was both a presenter and a participant, and with permission we have re-posted his blog about the event here.

Publisher presentations

There were presentations from eLife (Dr Wei Mun Chan) and F1000Research (Dr Sabina Alam, @Sab_Ra) in the Innovations in peer-review session. PeerJ was also mentioned several times – for publishing its peer reviews, for example.

In general, I think the presenters did a good job of demonstrating how modern peer review can benefit authors and research in general: eLife with its consultative peer review, where editors and reviewers discuss their views and opinions before a decision is made, and F1000Research with its open post-publication peer review system. My personal experiences with PeerJ (as a reviewer) and F1000Research (as a reviewer and author) have been excellent. All these journals are great venues for a modern open scholar.

Dr Jen Wright (@JennWrights) from Cambridge University Press presented a nice and detailed overview of how peer review works. It was well structured, following an FAQ model. She also very entertainingly illustrated her talk with references to PHD Comics, Lego Grad Student and Shit Academics Say.

.@JennWrights uses @legogradstudent to illustrate her peer review FAQ.

Open peer review

The highlight of the day was Corina Logan’s (@LoganCorina) brilliant talk, Open peer review – what is it and what does it achieve? She made a strong case for open peer review and reviewing ethics. Read her lab code of conduct, which covers reviewing ethics, publishing ethics, her commitment to conducting rigorous science, and lab interpersonal interactions.

It was nice to hear how her efforts in ethical publishing and reviewing have proved very positive for her academic career, which contrasts with the fear that early career researchers sometimes express that practising open science and ethical publishing could hinder their careers.

The role of peer-reviewers in promoting open science

I was also very happy to have the opportunity to give a talk about the role of peer review in promoting open science. My slides are available here. I plan to write it up and expand on it in a blog post.

In brief, my main message was that, if we want to promote rigorous science, we have an obligation to make sure that data, software and methods are adequately shared and described, and that checking this as a peer reviewer should not be too difficult or time consuming.

Currently, as far as data is concerned, my ideal review system would be a two-stage process:

  1. Submit your data and metadata to a repository, where they get checked (by specialists, data scientists, data curators) for quality, annotation and completeness of metadata.
  2. Submit your research with a link to the peer-reviewed data.

My talk earned me a lot of feedback and encouragement, both offline and online.


The effect on my Twitter activity today – the 12–2pm bar is 1689 impressions 🙂

Publons

I had heard about Publons before, but never took the time to learn more about it. Tom Culley did a great job of presenting it as a means of getting formal recognition for your peer review work. I will definitely give it a go in the near future.

Show me the data

I went to Dr Varsha Khodiyar’s (@varsha_khodiyar) workshop Show me the data: tips and tricks with peer-reviewing research data. Varsha is the data curation editor at Scientific Data. I am not necessarily a big fan of data journals (see here for some background), but it is clear that she is doing great work making sure that the data she checks and curates (in addition to the peer review) is available under an open license and of good quality.

When it comes to data/software submissions, I believe that shortcomings are often the result of a lack of adequate skills or experience in good sharing and documentation practice, rather than poor quality of the output itself. The review process should ideally serve as a way to support and educate researchers; the Bioconductor and rOpenSci projects are great examples of how a package review process can genuinely help authors improve their output, rather than delivering a binary accept/reject outcome.

A closed two-stage peer review, as is typically in place at journals, is a poor system for that. An open review, with more interaction between reviewers and authors, would be a more efficient approach.

More about the event

To hear more about the event, have a look at the #oscpeereview hashtag on Twitter. The event was live streamed and will be made available on YouTube in the coming days – I will add a link later.

All in all, I think it was a great event. Kudos to the Office of Scholarly Communication for their efforts and continuous dedication. As emphasised by many participants, events like this constitute a unique and important channel for highlighting innovations in digital and open science that are redesigning scholarship. They are also a unique venue where open researchers can discuss challenges and opportunities with the wider academic community.

Published 4 April 2017
Written by Dr Laurent Gatto
Creative Commons License

The case for Open Research: solutions?

This series arguing the case for Open Research has to date looked at some of the issues in scholarly communication today. Hyperauthorship, HARKing, the reproducibility crisis and a surge in retractions all stem from the requirement that researchers publish in high impact journals. The series has also looked at the invalidity of the impact factor and issues with peer review.

This series is one of an increasing cacophony of calls to move away from this method of rewarding researchers. Richard Smith noted in a recent BMJ blog criticising the current publication in journal system: “The whole outdated enterprise is kept alive for one main reason: the fact that employers and funders of researchers assess researchers primarily by where they publish. It’s extraordinary to me and many others that the employers, mainly universities, outsource such an important function to an arbitrary and corrupt system.”

Universities need to open research to ensure academic integrity, adjust to support modern collaboration and scholarship tools, and begin rewarding people who have engaged in certain types of process rather than relying on traditional assessment schemes. This was the thrust of a talk in October last year, “Openness, integrity & supporting researchers”. If nothing else, this approach makes ‘nightmare scenarios’ less likely. As Prof Tom Cochrane said in the talk, the last thing an institution needs is to be on the front page because of a big fraud case.

What would happen if we started valuing and rewarding other parts of the research process? This final blog in the series looks at opening up research to increase transparency. The argument suggests we need to move beyond rewarding only the journal article, to rewarding other research outputs, such as data sets, and research productivity itself.

So, let’s look at how opening up research can address some of the issues raised in this series.

Rewarding study inception

In his presentation about HARKing (Hypothesising After the Results are Known) at FORCE2016, Eric Turner, Associate Professor at OHSU, suggested that what matters is the scientific question and methodological rigour. We should be emphasising not study completion but study inception, before we can be biased by the results. It is already a requirement to post results of industry sponsored research in ClinicalTrials.gov – a registry and results database of publicly and privately supported clinical studies of human participants conducted around the world. Turner argues we should be using it to see the existence of studies. He suggested that reviews of protocols should happen without the results (and without the methods section, because this is written after the results are known).

There are some attempts to do this already. In 2013 Registered Reports was launched: “The philosophy of this approach is as old as the scientific method itself: If our aim is to advance knowledge then editorial decisions must be based on the rigour of the experimental design and likely replicability of the findings – and never on how the results looked in the end.” The proposal and process is described here. The guidelines for reviewers and authors are here, including the requirement to “upload their raw data and laboratory log to a free and publicly accessible file-sharing service.”

This approach has been met with praise by a group of scientists with positions on more than 100 journal editorial boards, who are “calling for all empirical journals in the life sciences – including those journals that we serve – to offer pre-registered articles at the earliest opportunity”. The signatories noted “The aim here isn’t to punish the academic community for playing the game that we created; rather, we seek to change the rules of the game itself.” And that really is the crux of the argument. We need to move away from the one point of reward.

Getting data out there

There is definite movement towards opening research. In the UK there is now a requirement from most funders that the data underpinning research publications are made available. Down under, the Research Data Australia project is a register of data from over 100 institutions, providing a single point to search, find and reuse data. The European Union has an Open Data Portal.

Resistance to sharing data amongst the research community is often due to the idea that if data is released with the first publication then there is a risk that the researcher will be ‘scooped’ before they can get those all-important journal articles out. In response to this query during a discussion with the EPSRC it was pointed out that the RCUK Common Principles state that those who undertake Research Council funded work may be entitled to a limited period of privileged use of the data they have collected to enable them to publish the results of their research. However, the length of this period varies by research discipline.

If the publication of data itself were rewarded as a ‘research output’ (which of course is what it is), then the issue of being scooped becomes moot. There have been small steps towards this goal, such as a standard method of citing data.

A new publication option is Sciencematters, which allows researchers to submit observations which are subjected to triple-blind peer review, so that the data is evaluated solely on its merits, rather than on the researcher’s name or organisation. As they indicate “Standard data, orphan data, negative data, confirmatory data and contradictory data are all published. What emerges is an honest view of the science that is done, rather than just the science that sells a story”.

Despite the benefits of having data available there are some vocal objectors to the idea of sharing data. In January this year a scathing editorial in the New England Journal of Medicine suggested that researchers who used other people’s data were ‘research parasites’. Unsurprisingly this position raised a small storm of protest (an example is here). This was so sustained that four days later a clarification was issued, which did not include the word ‘parasites’.

Evaluating & rewarding data

Ironically, one benefit of sharing data could be an improvement in the quality of the data itself. A 2011 study into why some researchers were reluctant to share their data found that reluctance was associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.

Professor Marcus Munafo, in his presentation at the Research Libraries UK conference held earlier this year, suggested that we need to build quality control methods into our daily practice. Open data is a very good step in that direction. There is evidence that researchers who know their data is going to be made open are more thorough in checking it. Maybe it is time for an update in the way we do science – we have statistical software that can run hundreds of analyses, and we can text and data mine large numbers of papers. We need to build in new processes and systems that refine science and think about new ways of rewarding science.

So should researchers be rewarded simply for making their data available? Probably not, some kind of evaluation is necessary. In a public discussion about data sharing held at Cambridge University last year, there was the suggestion that rather than having the formal peer review of data, it would be better to have an evaluation structure based on the re-use of data – for example, valuing data which was downloadable, well-labelled and re-usable.

Need to publish null results

Generally, this series looking at the case for Open Research has argued that the big problem is the only thing that ‘counts’ is publication in high impact journals. So what happens to all the results that don’t ‘find’ anything?

Most null results are never published: a study in 2014 found that of 221 sociological studies conducted between 2002 and 2012, only 48% of the completed studies had been published. This is a problem because not only is the scientific record inaccurate, but the publication bias “may cause others to waste time repeating the work, or conceal failed attempts to replicate published research”.

But it is not just the academic reward system that is preventing the widespread publication of null results – the interference of commercial interests in the publication record is another factor. A recent study looked into the issue of publication agreements, and whether a research group had signed one prior to conducting randomised clinical trials for a commercial entity. The research found that 70% of protocols mentioned an agreement on publication rights between industry and academic investigators; in 86% of those agreements, industry retained the right to disapprove, or at least review, manuscripts before publication. Even more concerning was that journal articles seldom report on publication agreements, and, if they do, statements can be discrepant with the trial protocol.

There are serious issues with the research record due to selected results and selected publication which would be ameliorated by the requirement to publish all results – including null results.

There are some attempts to address this issue. Since June 2002 the Journal of Articles in Support of the Null Hypothesis has been published bi-annually. The World Health Organisation has a Statement on the Public Disclosure of Clinical Trial Results, saying: “Negative and inconclusive as well as positive results must be published or otherwise made publicly available”. A project launched in February last year by PLOS ONE is a collection focusing on negative, null and inconclusive results. The Missing Pieces collection had 20 articles in it as of today.

In January this year there were reports that a group of ten editors of management, organisational behaviour and work psychology research had pledged to publish the results of well-conceived, well-designed, and well-conducted research even if the result was null. The way this works is that the paper is first presented without results or discussion, and is assessed on theory, methodology, measurement information, and analysis plan.

Movement away from using the impact factor

As discussed in the first of this series of blogs ‘The mis-measurement problem‘, we have an obsession with high impact journals. These blogs have been timely, falling as they have within what seems to be a plethora of similarly focused commentary. An example is a recent Nature news story by Mario Biagioli, who argued the focus on impact of published research has created new opportunities for misconduct and fraudsters. The piece concludes that “The audit culture of universities — their love affair with metrics, impact factors, citation statistics and rankings — does not just incentivize this new form of bad behaviour. It enables it.”

In recent discussions amongst the scholarly communication community about this mis-measurement, it was suggested that the problem could be addressed by limiting the number of articles that can be submitted for promotion. This would ideally reduce the volume of papers produced overall, or so the thinking goes. Harvard Medical School and the Computing Research Association “Best Practices Memo” were cited as examples by different people.

This is also the approach that has been taken by the Research Excellence Framework in the UK – researchers put forward their best four works from the previous period (typically about five years). But it does not prevent poor practice. Researchers are constantly evaluated for all manner of reasons. Promotion, competitive grants, tenure, admittance to fellowships are just a few of the many environments a researcher’s publication history will be considered.

Are altmetrics a solution? There is a risk that any alternative indicator becomes an end in itself. The European Commission now has an Open Science Policy Platform, which, amongst other activities has recently established an expert group to advise on the role of metrics and altmetrics in the development of its agenda for open science and research.

Peer review experiments

Open peer review is where peer review reports identify the reviewers and are published with the papers.  One of the more recent publishers to use this method of review is the University of California Press’ open access mega journal called Collabra, launched last year. In an interview published by Richard Poynder, UC Press Director Alison Mudditt notes that there are many people who would like to see more transparency in the peer review process. There is some evidence to show that identifying reviewers results in more courteous reviews.

PLOS ONE publishes work after an editorial review process that excludes potentially subjective assessments of significance or scope in order to focus on technical, ethical and scientific rigour. Once an article is published, readers are able to comment on the work in an open fashion.

One solution could be that used by CUP journal JFM Rapids, which has a ‘fast-track’ section of the journal offering fast publication for short, high-quality papers. This also operates a policy whereby no paper is reviewed twice, thus authors must ensure that their paper is as strong as possible in the first instance. The benefit is it offers a fast turnaround time while reducing reviewer fatigue.

There are calls for post-publication peer review; although some attempts to implement it have been unsuccessful, there are arguments that success is simply a matter of time – particularly if reviewers are incentivised. One publisher that uses this system is the platform F1000Research, which publishes work immediately and invites open post-publication review. And, just recently, Wellcome Open Research was launched using services developed by F1000Research. It will make research outputs available faster and in ways that support reproducibility and transparency. It uses an open access model of immediate publication followed by transparent, invited peer review and inclusion of supporting data.

Open ways of conducting research

All of these initiatives demonstrate a definite movement towards an open way of doing research by addressing aspects of the research and publication process. But there are some research groups that are taking a holistic approach to open research.

Marcus Munafo published last month a description of the experience of the UK Centre for Tobacco and Alcohol Studies and the MRC Integrative Epidemiology Unit at the University of Bristol over the past few years of attempting to work within an Open Science Model focused on three core areas: study protocols, data, and publications.

Another example is the Open Source Malaria project which includes researchers and students using open online laboratory notebooks from around the world including Australia, Europe and North America. Experimental data is posted online each day, enabling instant sharing and the ability to build on others’ findings in almost real time. Indeed, according to their site ‘anyone can contribute’. They have just announced that undergraduate classes are synthesising molecules for the project. This example fulfils all of the five basic principles of open research suggested here.

The Netherlands Organisation for Scientific Research (NWO) has just announced that it is making 3 million euros available for a Replication Studies pilot programme. The pilot will concentrate on the replication of social sciences, health research and healthcare innovation studies that have a large impact on science, government policy or the public debate. The intention after this study will be to “include replication research in an effective manner in all of its research programmes”.

A review of literature published this week has demonstrated that open research is associated with increases in citations, media attention, potential collaborators, job opportunities and funding opportunities. These findings are evidence, the authors say, “that open research practices bring significant benefits to researchers relative to more traditional closed practices”.

This series has been arguing that we should move to Open Research as a way of changing the reward system that bastardises so much of the scientific endeavour. However there may be other benefits according to a recently published opinion piece which argues that Open Science can serve a different purpose to “help improve the lot of individual working scientists”.

Conclusion

There are clearly defined problems within the research process that in the main stem from the need to publish in high impact journals. Throughout this blog there are multiple examples of initiatives and attempts to provide alternative ways of working and publishing.

However, all of this effort will only succeed if those doing the assessing change the rules of the game. This is tricky. Often the people who have succeeded have some investment in the status quo remaining. We need strong and bold leadership to move us out of this mess and towards a more robust and fairer future. I will finish with a quote that has been attributed to Mark Twain, Einstein and Henry Ford. “If you always do what you’ve always done, you’ll always get what you’ve always got”. It really is up to us.

Published 2 August 2016
Written by Dr Danny Kingsley
Creative Commons License