
Open Science: My List of Best Practices

[Photo: rocks stacked at Lake Huron]

This has nothing to do with Open Science. I just piled these rocks up at Lake Huron.

Are you interested in Open Science? Are you already implementing Open Science practices in your lab? Are you skeptical of Open Science? I have been all of the above, and some recent debates on #sciencetwitter have been weighing the pros and cons of Open Science practices. I decided to write this article to share my experiences as I've been pushing my own research in the Open Science direction.

Why Open Science?

Scientists have a responsibility to communicate their work to their peers and to the public. This has always been part of the scientific method, but the methods of communication have differed over the years and differ across fields. This essay reflects my opinions on Open Science (capitalized to reflect that it is a set of principles), and I also give an overview of my lab's current practices. I've written about this in my lab manual (which is also open), but until I sat down to write this essay, I had not really codified how my lab and research have adopted Open Science practices. This should not be taken as a recipe for your own science or lab, and these ideas may not apply to other fields. This is just my experience trying to adopt Open Science practices in my cognitive psychology lab.

Caveats First

Let’s get a few things out of the way…

First, I am not an expert in Open Science. In fact, until about 2-3 years ago, it never even occurred to me to create a reproducible archive of my data, to ensure that I could provide analysis scripts to someone else so that they could reproduce my analysis, or to provide copies of all of the items and stimuli that I used in a psychology experiment. I've received requests for data before, but I usually handled those in a piecemeal, ad hoc fashion: if someone asked, I would put together a spreadsheet.

Second, my experience generalizes only to comparable fields. I work in cognitive psychology and have collected behavioural data, survey questionnaire data, and electrophysiological data. I realize that data sharing can be complicated by ethics concerns for people who collect sensitive personal or health data, and that other fields collect complex biological data that may not lend itself well to immediate sharing.

Finally, the principles and best practices that I'm outlining here were adopted in 2018. Some of this was developed over the course of the last few years, but this is how we are running our lab now, and how we plan to run it for the foreseeable future. That means there are still gaps: studies that were published a few years ago that have not yet been archived, papers that may not have a preprint, and analyses that were done 20 years ago in SAS on the VAX 11/780 at the University at Buffalo. If anyone wants to see data from my well-cited 1998 paper on prototype and exemplar theory, I can get it, but it is not going to be easy.

Core Principles

There are many aspects to Open Science, but I am going to outline three areas that cover most of them. There will be some overlap, and some aspects may be missed.

Materials and Methods

The first aspect of Open Science concerns openness with respect to methods, materials, and reproducibility. To satisfy this criterion, a study or experiment should be designed and written up in such a way that another scientist or lab in the same field could carry out the same kind of study if they wanted to. That means any equipment that was used is described in enough detail or is readily available. It also means that the computer programs used to carry out the study are accessible and their code is freely available. As well, in psychology there are often visual, verbal, or auditory stimuli that participants make decisions about, or questions that they answer. These should also be available.

Data and Analysis

The second aspect of Open Science concerns the open availability of the data collected in a study. In psychology, data take many forms, but usually refer to participants' responses on surveys, responses to presented visual stimuli, EEG recordings, or data collected in an fMRI study. In other fields, data may consist of observations taken at a field station, measurements of an object or substance, or trajectories of objects in space. Anything that is measured, collected, or analyzed for a publication should be available to other scientists in the field.

Of course, in a research study or scientific project, the data that have been collected are also processed and analyzed. Here, several decisions need to be made. It may not always be practical to share raw data, especially if things were recorded by hand in a notebook or if the digital files are so large as to be unmanageable. On the other hand, it may not be useful to publish data that have been processed and summarized too much. For most fields, there is probably a middle ground where the data have been cleaned and minimally processed, but no statistical analyses have been done and the data have not been transformed. The path from raw data to this minimal state should be clear and transparent. In my experience so far, this is one of the most difficult decisions to make, and I don't have a solid answer yet.
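
To make that path concrete, here is a minimal sketch of what a transparent raw-to-shareable step can look like in R. The file names, column names, and cut-offs are all hypothetical; the point is only that every step between the raw data and the shared file lives in a script rather than in someone's memory.

    # A minimal raw-to-shareable cleaning script (hypothetical names and cut-offs).
    library(readr)
    library(dplyr)

    raw <- read_csv("raw/participant_responses.csv")   # one row per trial

    cleaned <- raw %>%
      filter(!is.na(response)) %>%                     # drop trials with no response
      filter(rt > 200, rt < 5000) %>%                  # drop implausible reaction times
      select(participant, condition, trial, response, rt)

    # No aggregation, no transformation, no statistics: just data.
    write_csv(cleaned, "shared/cleaned_trial_data.csv")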

In most scientific fields, data are analyzed using software and field-specific statistical techniques. Here again, several decisions need to be made while the research is being done in order to ensure that the end result is open and usable. For example, if you analyze your data with Microsoft Excel, what might be simple and straightforward to you might be uninterpretable to someone else. This is especially true if there are pivot tables, unique calculations entered into various cells, and transformations that have not been recorded. This, unfortunately, describes a large part of the data analysis I did as a graduate student in the 1990s. And I’m sure I’m not alone. Similarly, any platform that is proprietary will present limits to openness. This includes Matlab, SPSS, SAS, and other popular computational and analytic software. I think that’s why you see so many people who are moving towards Open Science practices encouraging the use of R and Python, because they are free, openly available, and they lend themselves well to scientific analysis.

Publication

The third aspect of Open Science concerns the availability of the published data and interpretations: the publication itself. This is especially important for any research that is carried out at a university or research facility that is supported by public research grants. Most of these funding agencies require that you make your research accessible.

There are several good open-access research journals that make publications freely available to anyone because the author helps to cover the cost of publication. But many traditional journals are still behind a paywall and are available only to paid subscribers. You may not see the effects of this if you work at a university, because your institution may have a subscription to the journal. The best solution is to create a free and shareable version of your manuscript, a preprint, that is available on the web for anyone to access but does not violate the publisher's copyright.

Putting this in practice

I tried to put some guidelines in place in my lab to address these three aspects of Open Science. I started with one overriding principle: when I submit a manuscript for publication in a peer-reviewed journal, I should also ensure that, at the time of submission, I have a complete data file that I can share, analysis scripts that I can share, and a preprint.

I have implemented as much of this as possible for every paper we've submitted for publication since late 2017 and for all of our ongoing projects. We don't submit a manuscript until we can meet the following:

  • We create a preprint of the manuscript that can be shared via a public online repository. We post this preprint to the repository at the same time that we submit the manuscript to the journal.
  • We create shareable data files for all of the data collected in the study described in that manuscript. These are almost always unprocessed or minimally processed data in a Microsoft Excel spreadsheet or a text file. We don’t use Excel for any summary calculations, so the data are just data.
  • As we’re carrying out the data analysis, we document our analyses in R notebooks. We share the R scripts /notebooks for all of the statistical analyses and data visualizations in the manuscript. These are open and accessible and should match exactly what appears the manuscript. In some cases, we have posted R notebooks with additional data visualization beyond what is in the manuscript as a way to add value to the manuscript.
  • We also create a shareable document for any nonproprietary assessments or questionnaires that were designed for this study and copies of any visual or auditory stimuli used in the study.
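
To make the notebook idea concrete, here is a hedged sketch of the kind of analysis chunk that might be shared alongside a manuscript. The data file, variable names, and the particular test are hypothetical stand-ins; what matters is that anyone can re-run the script on the shared data and reproduce the reported numbers and figure.

    # A hypothetical chunk from a shared analysis notebook. Anyone with the
    # shared data file should be able to re-run this and get exactly the
    # test and figure reported in the manuscript.
    library(readr)
    library(ggplot2)

    dat <- read_csv("shared/cleaned_trial_data.csv")

    # The comparison reported in the manuscript (assumes two conditions)
    t.test(rt ~ condition, data = dat)

    # The corresponding figure
    ggplot(dat, aes(x = condition, y = rt)) +
      geom_boxplot() +
      labs(x = "Condition", y = "Response time (ms)")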

Given this list of best practices, it would be disingenuous to suggest that every single paper from my lab meets all of these criteria. For example, one recently published study made use of Matlab instead of Python, because that's how we knew how to analyze the data. But we're using these principles as a guide as our work progresses. I view Open Science and these guidelines as an important and integral part of training my students, just as important as the theoretical contributions that we're making to the field.

Additional Resources and Suggestions

In order to achieve this goal, the following guidelines and resources have been helpful to me.

OSF

My public OSF profile lists current and recent projects. OSF stands for "Open Science Framework", and it's one of many repositories that can be used to share data, preprints, unformatted manuscripts, analysis code, and other materials. I like OSF, and it's kind of incredible to me that this wonderful resource is free for scientists to use. But if you work at a university or public research institute, your library probably runs a public repository as well.

Preregistration

For some studies, preregistration may be a helpful additional step in carrying out the research. There are limits to preregistration, many of which are addressed by Registered Reports. At this point, we haven't done any Registered Reports. Preregistration is helpful, though, because it encourages the researcher or student to lay out the analyses they plan to do, to describe how the data are going to be collected, and to make that plan publicly available before the data are collected. This doesn't mean that preregistered studies are necessarily better, but it's one more tool to encourage openness in science.

Python and R

If you’re interested in open science it really is worth looking closely at R and Python for data manipulation, visualization, and analysis. In psychology, for example, SPSS has been a long-standing and popular way to analyze data. SPSS does have a syntax mode that allows the researcher to share their analysis protocol, but that mode of interacting with the program is much less common than the GUI version. Furthermore, SPSS is proprietary. If you don’t have a license, you can’t easily look at how the analyses were done. The same is true of data manipulation in Matlab. My university has a license, but if I want to share my data analysis with a private company, they may not have a license. But anyone in the world can install and use R and Python.

Conclusion

Science isn’t a matter of belief. Science works when people trust in the methodology, the data and interpretation, and by extension, the results. In my view, Open Science is one of the best ways to encourage scientific trust and to encourage knowledge organization and synthesis.

Inspiration in the Lab

I run a mid-sized cognitive psychology lab: me as the PI, two PhD students, three master's students, and a handful of undergraduate honours students and RAs. We are a reasonably productive lab, but there are times when I think we could be doing more, both in getting our work out and in coming up with innovative and creative ideas.

Lately I’ve been thinking of ways to break out of our routines. Research, in my opinion, should be a combination of repetition (writing, collecting data, running an analysis in R) but also innovation where we look at new techniques, new ideas, new explanations. How to balance these?  Also, I want to increase collaborative problem solving in my lab. Often a student has a data set and the most common process is the student and I working together, or me reviewing what she or he has done. But sometimes it would be great if we’re all aware of the challenges and promises of each other’s work. We have weekly lab meetings, but that’s not always enough.

What follows are some ideas I’d like to implement in the near future. I’d love to hear what works (and does not work) from other scientists.

An Afternoon of Code

We rely on software (PsychoPy, Python, R, and E-Prime) to collect behavioural data. We have several decent programs to run the experiments we want to run, but programming is often a bottleneck, and all of us sometimes struggle to translate ideas into code. One way to work on this might be a coding retreat or an afternoon of coding: we all agree to meet in my lab and work on a shared task, or on designing a paradigm that we've never used before. I'd put up a prize for the first student to solve the problem. As an example, I'm looking to get a version of the classic "weather prediction task" working. We might agree to spend a day on this, perhaps each writing our own program, but at the same time so we can share ideas.
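
For a task like that, the trickiest part is usually the probabilistic cue structure rather than the screen drawing. Here is a sketch in R of one way a trial list could be generated; the cue probabilities and the rule for combining cues are illustrative stand-ins, not the values from the original task.

    # Sketch of a weather-prediction-style trial generator.
    # Four binary cues each predict "rain" with a different probability;
    # these probabilities are illustrative, not the published ones.
    set.seed(1)
    cue_probs <- c(0.80, 0.60, 0.40, 0.20)   # P(rain | cue present)
    n_trials  <- 200

    make_trial <- function() {
      cues <- rbinom(4, size = 1, prob = 0.5)            # which cues appear
      while (sum(cues) == 0) cues <- rbinom(4, 1, 0.5)   # require at least one cue
      # One simple way to combine active cues; the original task derives
      # outcome probabilities from the full cue pattern instead.
      p_rain  <- mean(cue_probs[cues == 1])
      outcome <- if (runif(1) < p_rain) "rain" else "sun"
      c(cues, outcome)
    }

    trials <- as.data.frame(t(replicate(n_trials, make_trial())))
    names(trials) <- c("cue1", "cue2", "cue3", "cue4", "outcome")
    head(trials)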

Data Visualization and Analysis

Similar to the idea above, I am thinking of ways to improve our skills in RStudio. One idea might be to take a data set from the most recent study in our lab and spend a day working together in RStudio to explore different visualizations, techniques for parsing the data, and so on. We each know different things, and R allows for so much customization; it would be helpful to be aware of each other's skill sets.
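
A session like that needs a starting point, so here is a hedged sketch of two quick views of the same hypothetical data set; the file and column names are made up.

    # Two starting points for a shared visualization session (hypothetical names).
    library(readr)
    library(ggplot2)

    dat <- read_csv("shared/cleaned_trial_data.csv")

    # Distribution of response times by condition
    ggplot(dat, aes(x = rt, fill = condition)) +
      geom_density(alpha = 0.4) +
      labs(x = "Response time (ms)", fill = "Condition")

    # Per-participant condition means, to see individual variability
    ggplot(dat, aes(x = condition, y = rt)) +
      stat_summary(fun = mean, geom = "point") +
      facet_wrap(~ participant)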

Writing at the Pub

Despite some of its limitations, I've been using Google Docs as a way to prepare manuscripts for publication. It's not much worse than Word, it allows for better collaborative work, and it integrates smoothly with Slack. With the addition of Paperpile, it's a very competent document preparation system. So I've thought about setting aside a few hours in the campus pub, bringing our laptops, and all writing together. Lab members who are working together on a paper can write simultaneously. Or we might pick one paper, and even grad students who are not authors per se could still help with edits and ideas. Maybe start with coffee or tea... then a beer or two.

Internal Replications

I’ve also thought about spending some time designing and implementing replications of earlier work. We already do this to some degree, but I have many published studies from 10 or more years ago that might be worth revisiting. I thought of meeting once every few month with my team to look at these and pick one to replicate. Then we work as a team to try to replicate the study as if it were someone else’s work (not ours) and run a full study. This would be done along side the new/current work in our lab.

Chefs learn by repeating the basic techniques over and over again until they master them and can produce a simple dish perfectly each time. I can think of no reason not to employ the same approach in my lab. The repetitive, inward-focused nature of a task like this might also lead to new insights as we rediscover what led us to design a task or experiment in a certain way.

Conclusion

I am planning to take these ideas to my trainees at one of our weekly lab meetings in the next few weeks. My goal is simply to try a few new things to break up the routine. I'd welcome any comments, ideas, or suggestions.

Canada’s Scientific Priorities are Heading in the Wrong Direction

The funding of basic research in Canada is still healthy, but I think we may be heading in the wrong direction. In short, we're still funding basic research, but it is less basic, and a greater proportion of the funding seems to go to projects with explicit commercial ends.

A recent article in the New York Times highlighted the increasing unhappiness within the scientific community about the Canadian government's recent shift from funding basic research to funding industry partnerships. This can have serious consequences, because the government is then in a position to fund certain industries over others. One of the scientists interviewed for the NYT piece is Dr. Hüner, a Canada Research Chair in environmental-stress biology at my own university (the University of Western Ontario). His research examines how plants deal with stress and suboptimal conditions. His NSERC grant was cut by more than 50%, leading to job losses in his lab and, most importantly, a reduction in his research on environmental responses to climate change.

This kind of cut is one of the things that really troubles me. It appears that the Harper government is selectively cutting programs that deal with climate change and the effects of toxic waste. There have been other recent stories about the closure of science libraries and of the Experimental Lakes Area. It is difficult to avoid the conclusion that Prime Minister Harper is trying to defund these programs more than others.

I should say that I am not against funding these industry partnerships (so NSERC, please do not cut my grant; I am still so very grateful!). Indeed, one of my own postdocs benefits from the Mitacs program, which was specifically created to promote partnerships with industry. I just worry that the funding reallocations are politically motivated, or at least that they can appear to be.

Canada (and even my university) overtly claims to want to be on the "world stage". And the world (or at least the NYT) is noticing the increasing unhappiness with our research funding allocations.