
The Scientific Workflow: A Guide for Psychological Science

When new students or postdocs enter your lab, do you have a plan or a guide for them? I have a lab manual that explains roles and responsibilities, but I did not (until recently) have a guide for how we actually do things. So I've made it my mission in 2019 and 2020 to write these things down and keep them updated. The Lab Manual (see above) is about roles and responsibilities, mentorship, EDI principles, and lab culture.

This current guide, which I call the Scientific Workflow, is my guide for doing psychological science. I wrote this to help my own trainees after a recent lab meeting where we discussed ideas around managing our projects. It started as a simple list, and I'm now making it part of my lab manual. You can find a formatted version here, and the LaTeX files here.


Nothing related to science here, but a beautiful picture of campus from our research building

Introduction

This is my guide for carrying out cognitive psychology and cognitive science research in my lab. The workflow is specific to my lab, but can be adapted. If you think this is helpful, please feel free to share and adapt for your own use. You can keep this workflow in mind when you are planning, conducting, analyzing, and interpreting scientific work. You may notice two themes that seem to run throughout the plan: documenting and sharing. That’s the take home message: Document everything you do and share your work for feedback (with the group, your peers, the field, and the public). Not every project will follow this outline, but most will.

Theory & Reading

The first step is theory development and understanding how your work relates to the relevant literature. For example, my research is based in cognitive science, and I develop and test theories about how the mind forms concepts and categories. My lab usually works from two primary theories: 1) prototype/exemplar theory, which deals with category representations; and 2) multiple systems theory (COVIS is an example), which addresses the category learning process and rule use. I follow these topics online and in the literature.


Paperpile is a great way to organize, annotate and share papers. See my article here.

You should keep up with developments in the field using Google Scholar alerts and its recommendations. I check every week, and I recommend that you do as well. We want to test the assumptions of these theories, understand what they predict, probe their limitations, and contrast them with alternative accounts. We're going to design experiments that help us understand the theory and the models, and that refine and/or reject some aspects of our theories.

  • Use Google Scholar to find updates that are important for your research.
  • Save papers in Paperpile (or Zotero) and annotate as needed.
  • Document your work in Google Docs (or another note taking app).
  • Share interesting papers and preprints with the whole lab group in the relevant channel(s) in Slack.

Hypothesis Generation

Hypotheses are generated to test assumptions and aspects of the theory and to test predictions of other theories. A hypothesis is a formal statement of something that can be tested experimentally; hypotheses often arise from more general research questions, which are broader statements about what you are interested in or trying to discover. You might arrive at a research question or an idea while reading a paper, at a conference, while thinking about an observation you made, or by brainstorming in an informal group or lab meeting.


A lab meeting with my student learning fNIRS

Notice that all of these assume that you put in some time and effort to understand the theory and then allow some time to work over ideas in your mind, on paper, or in a computer simulation.

  • Work on hypothesis generation in lab meetings, our advisory meetings, and on your own.
  • Document your thoughts in Google Docs (or your own notes on paper, OneNote or Evernote).
  • Share insights in lab meetings and in the relevant channel in Slack.

Design the Study/Experiment

Concurrent with hypothesis generation is experimental design. In most cases, we are designing experiments to test hypotheses about category representation and category learning and/or the predictions of computational models. We want to test hypotheses generated from theories and also carry out exploratory work to help refine our theories. Avoid the temptation to put the cart before the horse and come up with experiments and studies that will produce an effect for their own sake. We don't just want to generate effects.

The design comes first. Consider the logic of your experiment, what you plan to manipulate, and what you want to measure. Avoid the temptation to add in more measures than you need, just to see if there’s an effect. For example, do you need to add in 2-3 measures of working memory, mood, or some demographic information just to see if there’s an effect there? If it’s not fully justified, it may hurt more than help because you have non-theoretically driven measures to contend with. I’ve been guilty of this in the past and it always comes back to haunt me.

  • Work on experiment generation in lab meetings, advisory meetings, and on your own.
  • Document your work and ideas in Google Docs or a note taking app that you can share.
  • Use G*Power to estimate correct sample size.
  • Use PsychoPy or Qualtrics to build your experiment.
  • Test these experiment protocols often: on yourself, on lab mates, and on volunteers.
  • Develop a script for research assistants who will be helping you carry out the study.
  • Share insights in lab meetings and in the relevant channel in Slack.
  • Organize tasks and chores in the relevant Trello board for your project.
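If you want to sanity-check G*Power's answer in code, the same two-group sample-size calculation can be sketched in a few lines of Python using only the standard library. This is a normal approximation, not G*Power's noncentral-t computation, so it typically lands a participant or two under what G*Power reports; treat it as a rough cross-check, not a replacement.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-tailed, two-sample t-test.

    Uses the normal approximation n = 2 * ((z_{1-a/2} + z_{power}) / d)^2.
    G*Power's noncentral-t result is typically 1-2 participants larger.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical z for two-tailed alpha
    z_beta = z.inv_cdf(power)            # z corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # medium effect -> 63 per group
```

For a medium effect (d = 0.5) at alpha = .05 and 80% power, this gives 63 per group, versus 64 from G*Power's exact calculation.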

Analysis Plan & Ethics Protocol

This is where we start to formalize things. An analysis plan links the hypothesis and the experimental design to the dependent variables and/or outcome measures. In this plan, we'll describe and document how the data will be collected, visualized, analyzed, stored, and shared. The plan should describe how we will deal with outlier data, missing data, data from participants who did not complete the experiment correctly, experimenter error, malfunctions, etc. It can include tentative predictions derived from a model and a justification of how we intend to analyze and interpret the data. The plan can also be pre-registered with OSF, which is where we'll plan to share the data we collect with the scientific community.

At the same time, we also want to write an ethics protocol. This is a description of our experiment, the research question, and procedures for the University REB. It will also include standardized forms for information and consent, and policies for recruitment, subject safety, data storage, and security. The REB has templates and examples, and our lab Slack channel on ethics can include examples as well. Use templates whenever possible.

Both of these documents, the analysis plan and the ethics protocol, should describe exactly what we are doing and why we are doing it. They should provide enough information that someone else would be able to reproduce our experiments in their own lab. These documents will also provide an outline for your eventual method section and your results section.

  • Document your analysis plan and ethics protocol work in Google Docs.
  • Link these documents to the project sheet or Trello board for your project.
  • Share in the relevant channel in Slack.

Collect Data

Once the experiment is designed and the stimuli have been examined, we're ready to collect data or to obtain data from a third party (which might be appropriate for model testing). Before you run your first subject, however, there are some things to consider. Take some time to run yourself through every condition several times, and ask other lab members to do the same. You can use this to make sure things are working exactly as you intend, that the data are being saved on the computer, and that the experiment takes as long as planned.

When you are ready to collect data for your experiment:

  • Meet with all of your research volunteers to go over the procedure.
  • Book the experiment rooms on the Google Calendar.
  • Reserve a laptop or laptops on the Google Calendar.
  • Recruit participants through SONA or flyers.
  • Prepare the study for M-Turk or Prolific.
  • Use our lab email for recruitment.

After you have run through your experiment several times, documented all the steps, and ensured that everything is working exactly as you intended, you are ready to begin. While you are running your experiment:

  • Document the study in Google Docs, Trello, and/or Slack (as appropriate).
  • Make a note of anything unusual or out of the ordinary for every participant in a behavioural study.
  • Collect signatures from participants if you are paying them.
  • Data should be stored in text files that can be opened with Excel or Google Sheets or imported directly into R. Be sure these are linked to the project sheet.
  • Make sure the raw data are labelled consistently and are never altered.
  • Be sure to follow the data storage procedures outlined in the ethics protocol.

Data Management

Your data plan should specify where and how to store your data. While you are collecting data, you should be working on a script in R (or Python) to extract and summarize the raw data according to your plan. When you reach the planned sample size, ensure that all of the data are secure and backed up, and do an initial summary with your R script.

As you work on summarizing and managing your data:

  • Make notes in the project sheet and/or Trello board about where the data are stored.
  • Document your steps in an R Notebook (or Python Notebook).
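As a sketch of what such an extraction script might look like, here is a minimal Python version. The file layout it assumes (one CSV per participant with subject, condition, and rt columns) is a made-up example; adapt the paths and column names to your own raw data.

```python
# Minimal extraction sketch: collapse raw per-participant trial files
# into one mean RT per subject-by-condition cell. The file layout and
# column names here are hypothetical examples.
import csv
import glob
from statistics import mean

def summarize(raw_dir):
    """Read every *.csv in raw_dir and return mean RT per (subject, condition)."""
    cells = {}  # (subject, condition) -> list of RTs
    for path in sorted(glob.glob(f"{raw_dir}/*.csv")):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                key = (row["subject"], row["condition"])
                cells.setdefault(key, []).append(float(row["rt"]))
    return {key: mean(rts) for key, rts in cells.items()}
```

Note that the script only reads the raw files and writes its summary elsewhere, which keeps the "raw data are never altered" rule automatic.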

Plots & Stats

Remember the photo of Dr. Katie Bouman, then a postdoc, when she first saw the rendering of the first photos of a black hole that her algorithms generated? That's the best part of science: seeing your data visualized for the first time. When you have completed your experiment and taken care of the data storage and basic processing, it's time to have fun and see what you discovered. The analysis plan is your guide: it describes how you want to analyze the data, what your dependent variables are, and how to conduct statistical tests with your data to test the hypotheses. But before you do any statistics, work on visualizing the data. Use your R Notebook to document everything, and generate boxplots, scatter plots, or violin plots to see the means, medians, and the distribution of the data.

Because you are using R Notebooks to do the analysis, you can write detailed descriptions of how you created each plot, what it shows, and how we should interpret it. If you need to drop or exclude a subject's data for any reason, exclude them from the data set in R; do not delete the data from the raw data file. Make a comment in the script noting which subject was dropped and why. This keeps the record clear and transparent.
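The same exclude-in-code principle works in Python if that's where your analysis lives. The subject IDs and reasons below are hypothetical; the point is only that every exclusion is declared and documented in the script while the raw files stay untouched.

```python
# Exclusions live in the analysis script, never in the raw files.
# Subject IDs and reasons here are hypothetical examples.
EXCLUDED = {
    "s07": "fell asleep during block 3 (see run notes)",
    "s12": "program crashed; incomplete data",
}

def apply_exclusions(data):
    """Drop excluded subjects from an in-memory copy of the data.

    `data` maps subject ID -> that subject's trials. The raw files on
    disk are untouched; the EXCLUDED table documents every dropped
    participant and why, per the analysis plan.
    """
    kept = {sid: trials for sid, trials in data.items() if sid not in EXCLUDED}
    for sid, reason in EXCLUDED.items():
        if sid in data:
            print(f"excluded {sid}: {reason}")  # leaves a visible record in the notebook output
    return kept
```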

You can also use R to conduct the tests that we proposed in the analysis plan. This might be a straightforward ANOVA or t-test, LME models, regression, etc. Follow the plan you wrote, and if you deviate from it, justify and document that exploratory analysis.

If you are fitting a decision boundary model to your data, make sure you have the code for the model (these will be on my GitHub), and do your modelling separately from the behavioural analysis. The GLM models are saved as R scripts, but you should copy or fork them into your R Notebooks for your analysis so you can document what you did. Make sure that you develop a version for your experiment and that the generic model is not modified.

If you are fitting a prototype or exemplar model, these have been coded in Python. Use Python 3 and a basic text editor or JupyterLab. JupyterLab might be better, as it can generate markdown and reproducible code like R Notebooks. Or just call Python from RStudio.
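For a feel of what these models compute, here is a generic exemplar-model (GCM-style) sketch in Python 3. This is not the lab's actual code: similarity to each stored exemplar falls off exponentially with distance, and category probabilities are similarity-weighted; the sensitivity parameter `c` is illustrative.

```python
# Generic exemplar-model (GCM-style) sketch -- not the lab's actual code.
from math import exp, dist

def exemplar_probs(stimulus, exemplars, c=1.0):
    """Return P(category | stimulus) under a simple exemplar model.

    `exemplars` maps category label -> list of stored exemplar points;
    `c` is the sensitivity parameter (higher = steeper generalization).
    """
    # Summed similarity of the stimulus to each category's exemplars.
    sims = {
        cat: sum(exp(-c * dist(stimulus, e)) for e in pts)
        for cat, pts in exemplars.items()
    }
    total = sum(sims.values())
    # Luce choice rule: probability proportional to summed similarity.
    return {cat: s / total for cat, s in sims.items()}
```

For example, a stimulus sitting on top of category A's only exemplar gets most of the probability mass for A, and the probabilities across categories always sum to 1.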

  • Follow your analysis plan.
  • Consult with me or your peers if you notice any unusual patterns with anything.
  • Make notes in the project sheet and/or Trello board about what analyses you’ve completed.
  • Document your steps in an R Notebook (or Python Notebook).
  • If you drop a participant for any reason, indicate this in the comments of your R script (or other notes). We want this information to be recorded and transparent.

Present and Explain Your Work

While you are working on your analysis, you should present the interim work often in lab meetings for the rest of the group, and we can discuss the work when we meet individually. The reason to present and discuss often is to keep the ideas and work fresh in your mind by reviewing manageable pieces of it. If you try to do too much at once, you may miss something or forget to document a step. Go over your work, make sure it's documented, then work on the new analyses, and repeat. You should be familiar with your data and your analysis so that you can explain it to yourself, to me, to your peers, and eventually to anyone who reads your paper.

Use the following guidelines for developing your work:

  • Make your best plots and figures.
  • Present these to the lab on a regular basis.
  • Use RPubs to share summary work instantly with each other and on the web.
  • Keep improving the analysis after each iteration.
  • You should always have 8-10 slides that you can present to the group.
  • Document your work in R Notebooks, Google Docs, Trello, and Google Slides.

Write Papers Around This Workflow

The final step is to write a paper that describes your research question, your experimental design, your analysis, and your interpretation of what the analysis means. A scientific paper, in my opinion, has two important features:

  1. The paper should be clear and complete. That means it describes exactly what you wanted to find out, how and why you designed your experiment, how you collected your data, how you analyzed your data, what you discovered, and what that means. Clear and complete also means that it can be used by you or by others to reproduce your experiments.
  2. The paper should be interesting. A scientific paper should be interesting to read. It needs to connect to a testable theory, some problem in the literature, or an unexplained observation. It should be just as long as it needs to be.

I think the best way to generate a good paper is to make good figures. Try to tell the story of your theory, experiment, and results with figures. The paper is really just writing how you made the figures. You might have a theory or model that you can use a figure to explain. You can create clear figures for the experimental design, the task, and the stimuli. Your data figures, which you made according to your analysis plan, will frame the results section, and a lot of what you write is telling the reader what the figures show, how you made them, and what they mean. Writing a scientific paper is writing a narrative for your figures.

Good writing requires good thinking and good planning. But if you’ve been working on your experiment according to this plan, you’ve already done a lot of the thinking and planning work that you need to do to write things out. You’ve already made notes about the literature and prior work for your introduction. You have notes from your experimental design phase to frame the experiment. You have an ethics protocol for your methods section and an analysis plan for your results. You’ll need to write the discussion section after you understand the results, but if you’ve been presenting your 8-10 slides in lab meeting and talking about them you will have some good ideas and the writing should flow. Finally, if you’ve been keeping track of the papers in Paperpile, your reference section should be easy.

Submit the paper

The final paper may have several experiments, each around the theme set out in the introduction. It’s a record of what we did, why we did it, and how. The peer reviewed journal article is the final stage, but before we submit the paper we have a few other steps to ensure that our work roughly conforms to the principles of Open Science, each of which should be straightforward if we’ve followed this plan.

  • Create a publication quality preprint using the lab template. We’ll host this on PsyArXiv (unless submitting a blind ms.)
  • Create a file for all the stimuli or materials that we used and upload to OSF.
  • Create a data archive with all the raw, de-identified data and upload to OSF.
  • Upload a clean version of your R Notebook that describes your analyses to OSF.

The final steps are organized around the requirements of each journal. Depending on where we decide to submit our paper, some of these may change. Some journals will insist on a Word .doc file; others will allow a PDF. In both cases, assume that the Google Doc is the real version and the PDF or .doc files are just for the journal submission. Common steps include:

  • Download the Google Doc as a MS Word Doc or PDF.
  • Create a blind manuscript if required.
  • Embed the figures if possible; otherwise, place them at the end.
  • Write a cover letter that summarizes the paper and why we are submitting it.
  • Identify possible reviewers.
  • Write additional summaries as required and generate keywords.
  • Check and verify the names, affiliations, and contact information for all authors.
  • Submit and wait for 8-12 weeks!

Conclusion

As I mentioned at the outset, this might not work for every lab or every project. But the take home message–document everything you do and share your work for feedback–should resonate with most science and scholarship. Is it necessary to have a formal guide? Maybe not, though I found it instructive for me as the PI to write this all down. Many of these practices were already in place, but not really formalized. Do you have a similar document or plan for your lab? I’d be happy to hear in the comments below.

Dealing with Failure

When the hits come, they really come hard.

I’m dealing with some significant personal/professional failures this month.

I put in for two federal operating grants this past year: one from NSERC to fund my basic cognitive science work on learning and memory and one from SSHRC to fund some relatively new research on mindfulness meditation. I worked pretty hard on these last fall.

And today I found out that neither were funded.


Port Stanley Beach, ON 2018

This means that for the first time in a long number of years, my lab does not have an active federal research grant. The renewal application from NSERC is particularly hard to swallow, since I’ve held multiple NSERC grants and they have a pretty high funding rate relative to other programs. I feel like the rug was pulled out from under me and worry about how to support the graduate students in my lab. I can still carry on doing good research this coming year, and I have some residual funds, but I won’t lie: this is very disappointing.

The cruelest month, the cruelest profession.

It’s often said that academic/scientific work loads heavily on dealing with failure. It’s true. I’ve had failed grants before. Rejected manuscripts. Experiments that I thought were interesting or good that fell apart with additional scrutiny. For every success, there are multiple failures. And that’s all just part of being a successful academic. Beyond that, many academics may work 6-8 years to get a PhD, do a post doc, and find themselves being rejected from one job after another. Other academics struggle with being on the tenure track and may fail to achieve that milestone.

And April really, truly is the cruelest month in academia. Students may have to deal with rejection from grad school, med school, graduate scholarships, job applications, internships, and residency programs. They worry about their final exams. Faculty worry about rejection from grants, looking for jobs, and a whole host of other things. (And at least here in Canada, we still have snow in the forecast…)

Why am I writing this?

Well, why not? I’m not going to hide these failures in shame. Or try to blame someone else. I have to look these failures in the eye, own them, take responsibility for them, and keep working. Part of that means taking the time to work through my emotions and feelings about this. That’s why I’m writing this.

I’m also writing, I guess, to say that it’s worth keeping in mind that we all deal with some kind of stress or anxiety or rejection. Even people who seem to have it together (like me, I probably seem like I have it together: recently promoted to Full Professor, respectable research output, I’ve won several teaching awards, written a textbook, and have been a kind and decent teacher and mentor to 100s of students)…we all get hits. But really, I’m doing fine. I’m still lucky. I’m still privileged. I know that others will be hurting more than I am. I have no intention to wallow in pity or fight with rage. I’m not going to stop working. Not going to stop writing, doing research or trying to improve as a teacher. Moving forward is the only way I can move.

Moving on

We all fail. The question is: What are you going to do about it?

From a personal standpoint, I'm not going to let this get me down. I've been in this boat before. I have several projects that are now beginning to bear fruit. I've had some terrific insights about new collaborative work. I have a supportive department, and I'm senior enough to weather quite a lot. (Though I'm not Job, so you don't have to test me, Lord!)

From a professional standpoint, though, I think I know what the problems were and I don’t even need to see the grant reviews or committee comments (though I will be looking at them soon). There’s only one of me and branching off into a new direction three years ago to pursue some new ideas took time away from my core program, and I think both suffered a bit as a result. That happens, and I can learn from that experience.

I'll have to meet with my research team and students next week and give them the bad news. We're probably going to need to have some difficult conversations about working through this, and I know this will hit some of them hard too.

It might also mean some scholarly pruning. It might mean turning off a few ideas to focus more on the basic cognitive science that’s most important to me.

Congratulations to everyone who got good news this month. Successful grants, acceptance into med school, hired, or published. Success was earned. And for those of us getting bad news: accept it, deal with it, and progress.

Now enjoy the weekend everyone.


Does This Project Bring Me Joy?


I have too many research projects going on.

It’s great to be busy, but I’m often overwhelmed in this area. As a university professor, some of my job is well defined (e.g. teaching) but other parts not so much. My workload is divided into 40% research, 40% teaching, and 20% service. Within each of these, I have some say as to what I can take on. I can teach different classes and volunteer to serve on various committees. But the research component is mine. This is what I really do. I set the agenda. I apply for funding. This is supposed to be my passion.

So why do I feel overwhelmed in that area?

I think I have too many projects going on. And I don't mean that I am writing too many papers. I'm most certainly not doing that. I mean I have too many different kinds of projects. There are several projects on psychology and aging, projects on brain electrophysiology and category learning, a project on meditation and wellbeing in lawyers, a project on patient compliance, a project on distraction from smartphones, plus 4-5 other ideas in development, and at least 10 projects that are most charitably described as "half-baked ideas that I had on the way home from a hockey game".

Add to this the many student projects I supervise that may not be quite in my wheelhouse, but are close. And I'll admit, I have difficulty keeping these things straight. I'm interested in things. But when I look at the list of things, I confess I have a tough time seeing a theme sometimes. And that's a problem, as it means I'm not really fully immersed in any one project. I cease to be an independent and curious scientist and become a mediocre project manager. And when I look at my work objectively, more often than not, it seems mediocre.

Put another way, sometimes I'm not really sure what I do anymore…

So what should I do about this, other than complain on my blog? I have to tidy up my research.

A Research Purge

There is a very popular book called "The Life-Changing Magic of Tidying Up". I have not read this book, but I have read about this book (and let's be honest, that's sometimes the best we can do). The essence of the approach is that you should not be hanging on to things that are not bringing you joy.

Nostalgia is not joy.

Lots of stuff getting in the way is not joy. And so you go through things, one category at a time, look at each thing, and ask "does this item spark joy?" If the answer is no, you discard it. I like this idea.

If this works for a home or a room…physical space…then it should work for the mental space of my research projects. So I’m going to try this. I thought about this last year, but never quite implemented it. I should go through each project and each sub project and ask “Does this project bring me joy?” or “Is there joy in trying to discover this?” Honestly, if the answer is “no” or “maybe” why should I work on it? This may mean that I give up on some things and that some possible papers will not get published. That’s OK, because I will not be compelled to carry out research and writing if it is not bringing me joy. Why should I? I suspect I would be more effective as a scientist because I will (hopefully) focus my efforts on several core areas.

This means, of course, that I have to decide what I do like. And it does not have to be what I’m doing. It does not have to be what I’ve done.

The Psychology of the Reset

Why do we like this? Why do people want to cleanse? To reset. To get back to basics? It seems to be the top theme in so many pop-psych and self help books. Getting rid of things. A detox or a “digital” detox. Starting over. Getting back to something. I really wonder about this. And although I wonder why we behave this way, I’m not sure that I would not find joy in carrying out a research study on this…I must resist the urge to start another project.

I’m going to pare down. I still need to teach, and supervise, and serve on editorial boards, etc: that’s work. I’m not complaining and I like the work. But I want to spend my research and writing time working on projects that will spark joy. Investigating and discovering things that I’m genuinely curious about…curious enough to put in the hours and time to do the research well.

I'd be curious, too, to know if others have tried this. Has it worked? Have you become a better scholar and scientist by decluttering your research space?

Thanks for reading and comments are welcome.

Inspiration in the Lab

I run a mid-sized cognitive psychology lab: it's me as the PI, 2 PhD students, 3 master's students, and a handful of undergraduate honours students and RAs. We are a reasonably productive lab, but there are times when I think we could be doing more in terms of getting our work out and also coming up with innovative and creative ideas.

Lately I've been thinking of ways to break out of our routines. Research, in my opinion, should be a combination of repetition (writing, collecting data, running an analysis in R) and innovation, where we look at new techniques, new ideas, and new explanations. How do we balance these? Also, I want to increase collaborative problem solving in my lab. Often a student has a data set, and the most common process is the student and I working together, or me reviewing what she or he has done. But sometimes it would be great if we were all aware of the challenges and promises of each other's work. We have weekly lab meetings, but that's not always enough.

What follows are some ideas I’d like to implement in the near future. I’d love to hear what works (and does not work) from other scientists.

An Afternoon of Code

We rely on software (PsychoPy, Python, R, and E-Prime) to collect behavioural data. We have several decent programs to run the experiments we want to run, but this is often a bottleneck, and all of us sometimes struggle to translate ideas into code. One way to work on this might be to have a coding retreat or an afternoon of coding. We all agree to meet in my lab, and we work on a shared task or on designing a paradigm that we've never used before. I'd put up a prize for the first student to solve the problem. As an example, I'm looking to get a version of the classic "weather prediction task". We might agree to spend a day working on this, maybe each on our own program, but at the same time so we can share ideas.

Data Visualization and Analysis

Similar to the idea above, I am thinking of ways to improve our skills in RStudio. One idea might be to take a data set from the most recent study in our lab and spend a day working together in RStudio to explore different visualizations, techniques for parsing, etc. We each know different things, and R allows for so much customization, so it would be helpful to be aware of each other's skill sets.

Writing at the Pub

Despite some of its limitations, I've been using Google Docs to prepare manuscripts for publication. It's not much worse than Word, but it really allows for better collaborative work and integrates smoothly with #Slack. With the addition of Paperpile, it's a very competent document preparation system. So I thought about setting aside a few hours in the campus pub: bring our laptops and all write together. Lab members who are working together on a paper can write simultaneously. Or we might pick one paper, and even grad students who are not authors per se would still be able to help with edits and ideas. Maybe start with coffee/tea…then a beer or two.

Internal Replications

I've also thought about spending some time designing and implementing replications of earlier work. We already do this to some degree, but I have many published studies from 10 or more years ago that might be worth revisiting. I thought of meeting once every few months with my team to look at these and pick one to replicate. Then we work as a team to try to replicate the study as if it were someone else's work (not ours) and run a full study. This would be done alongside the new/current work in our lab.

Chefs learn by repeating the basic techniques over and over again until they master them and can produce a simple dish perfectly each time. I can think of no reason not to employ the same technique in my lab. I think the repetitive, inward-focused nature of a task like this might also lead to new insights as we rediscover what led us to design a task or experiment in a certain way.

Conclusion

I am planning on taking these ideas to my trainees at one of our weekly lab meetings in the next few weeks. My goal is to just try a few new things to break up the routine. I'd welcome any comments, ideas, or suggestions.