Category Archives: Academia

The Unbearable Sameness of Online Meetings

Have you been working from home since March? Are you enjoying it, or are you missing your old workplace? Are you also starting to notice a monotony that seems to lead to mild memory confusion? I am. In this post, I want to explore how and why doing everything online might make it harder to keep things straight.

In 2020, many of us learned to work from home. The novel coronavirus that causes COVID-19 also caused a shift in how a lot of us work. Across the world, many teachers, tech workers, knowledge workers, people in media, and people in business began working from home and holding meetings on video platforms like Zoom, Skype, or MS Teams. For many of us, it represented a significant shift in how we did our work, even though much of the content of the work stayed the same.

At first this was as novel as anything else. I liked writing from home, and I converted the spare bedroom into an office.

My converted spare room home office and near-constant companion, Peppermint the cat.

Great, I thought. We’ll get through this. More than 7 months later, I’m not sure. There have been more than enough essays on Zoom fatigue, the challenges of spotty internet, tips for better meetings, and Zoom etiquette. I want to talk about something else: the unbearable sameness of online meetings and its effect on my memory.

Zoom Takes Over

Video meetings have been around for a while in academia, but the near-total reliance on them in 2020 was unprecedented. We use our knowledge of the past to help guide our behaviour in new situations. But for this, I had few prior memories available to guide me. What I did have were my usual routines, like weekly lab meetings and weekly advisory meetings with my students. So that’s how I began to structure my online day. It was similar to my pre-pandemic workday, just using video meetings in place of face-to-face meetings.

I began to teach online, using Zoom for student meetings and recording lecture videos. I meet weekly with my graduate students on Zoom. We hold weekly lab meetings on Zoom. We hold department meetings on Zoom. We have PhD defences and master’s thesis defences on Zoom. There are formal Zoom talks and informal Zoom coffee breaks. Some people even have Zoom happy hours. Even academic conferences, which have long been a way for academics, researchers, students, and scientists to come together from different locations, switched to online formats. Soon, I was doing all my work—all my teaching, research, committee work, and mentoring—from the same screen on the same computer in the same room.

Memory Errors

Although a lot of my research and teaching work can be carried out at home and online, I began to notice some small changes. Not just general fatigue, though that’s also a concern. I was making more simple memory errors than usual. For example, I might talk with one student for 10 minutes about the wrong project. Or I might confuse one meeting for another. A lot of these mistakes were source memory errors. I remembered the student, a topic, and the meeting, but confused which one was which. I was more like the stereotype of the “absent-minded professor” than I used to be.

Then I realized a possible source for the problem: Everything looked the same. I was looking at the same screen on the same computer in the same room for everything. This was not typical. For my entire career as an academic, there have always been different places for different activities. I would lecture in a lecture hall or classroom. I would hold seminars in a small discussion room. I would meet with students in my office. I would meet with colleagues at the café on campus. Committee meetings were usually held in meeting rooms and board rooms. I would work on data analyses in my office. I would usually write at home or sometimes in a local café. Different places for different tasks. But now, all the work was in one place. Teaching, research, writing, and advising were all online. And worse, it all looked the same. It was all on the same screen, on Zoom, and in my home office. I no longer had the variety of space, time, location, and context to create a varied set of memory cues.

Location Based Memory

Memory is flexible, and remembering depends on spreading activation among similar memories. In some cases, local context can be a strong and helpful memory cue. If you encode some information in one context, you will often remember that information better in the same context. Memory retrieval depends on a connection between the cues that were present at encoding and the cues that are present at retrieval. This is how we know how to adjust our behaviour in different contexts.

We react to locations all the time. When you walk into a restaurant or diner, you probably adjust your behaviour. If you return to a restaurant that you were at years ago, you will remember having been there before. Students behave differently in class than out of class. Being in a specific place helps you remember things that you associated with that place. This is all part of our natural tendency to remember things where and when they are likely to matter most.

But it seemed like this natural tendency was working against me. Each day began and ended at the desk in my home office. Each day I was in the same location when I taught, wrote, met, and carried out analyses. But this was also the same location where I read the news, caught up on Twitter, and ordered groceries online. What I noticed in my new forgetfulness was that I was experiencing memory interference. Everything was starting to look the same. The contextual cues that would normally be a helpful reminder of what I was doing were no longer working as memory cues, because they were the same cues for everything and everyone. When everything looks the same, context is no longer a helpful memory cue. If you work, meet, read, write, shop, and casually read the news in exactly the same place, the likelihood that you will make an error of confusion increases. It’s not that I am forgetting things; it’s just that I am not always remembering the right things.

What (if any) solutions?

This is not an easy problem to solve, of course, because as long as COVID is ascendant, I will still have to work from home. But I am trying a few things. One simple fix might be to vary my approach to video meetings. It might help to change platforms in a consistent way, say by meeting with one working group on MS Teams and another on Zoom. It’s not as strong a difference as meeting in different rooms, but it’s still a change of venue. Another way to accomplish the same goal is to change the appearance of my computer each time I meet with someone: use different backgrounds for different people, or a light mode for “work” and a dark mode for “home”. These seem like very small things, and they might not fix the problem entirely, but they could help.

I thought about working in my university office, maybe once a week, and holding my Zoom-based grad advisory meetings that way. Maybe that would help create a good context cue. Though I have to say, I like my home office setup with my cat, no driving commute, and unlimited coffee. The other issue with working on campus is that if we have another lockdown, I would have to start over again from home. I really want to make it work from home. I need to find new ways to work, not try to work the old way.

I’m willing to try different things to help bring some better sense of structure to my online life. Like it or not, this is how a lot of us are working now and for the foreseeable future.

I would be interested in hearing your suggestions and other ideas.

The Scientific Workflow: A Guide for Psychological Science

When new students or postdocs enter your lab, do you have a plan or a guide for them? I have a lab manual that explains roles and responsibilities, but I did not (until recently) have a guide for how we actually do things. So I’ve made it my mission in 2019 and 2020 to write these things down and keep them updated. The Lab Manual (see above) is about roles and responsibilities, mentorship, EDI principles, and lab culture.

This current guide, which I call the Scientific Workflow, is my guide for doing psychological science. I wrote it to help my own trainees after a lab meeting where we discussed ideas around managing our projects. It started as a simple list, and I’m now making it part of my lab manual. You can find a formatted version here, and the LaTeX files here.

Nothing related to science here, but a beautiful picture of campus from our research building.

Introduction

This is my guide for carrying out cognitive psychology and cognitive science research in my lab. The workflow is specific to my lab but can be adapted. If you think this is helpful, please feel free to share and adapt it for your own use. You can keep this workflow in mind when you are planning, conducting, analyzing, and interpreting scientific work. You may notice two themes that run throughout the plan: documenting and sharing. That’s the take-home message: document everything you do and share your work for feedback (with the group, your peers, the field, and the public). Not every project will follow this outline, but most will.

Theory & Reading

The first step is theory development and understanding the relationship of your work to the relevant literature. For example, my research is based in cognitive science, and I develop and test theories about how the mind forms concepts and categories. My lab usually works from two primary theories: 1) prototype/exemplar theory, which deals with category representations; and 2) multiple systems theory (COVIS is an example), which addresses the category learning process and rule use. I follow these topics online and in the literature.

Paperpile is a great way to organize, annotate, and share papers. See my article here.

You should keep up with developments in the field using Google Scholar alerts and its recommendations. I check every week, and I recommend that you do as well. We want to test the assumptions of these theories, understand what they predict, test their limitations, and contrast them with alternative accounts. We’re going to design experiments that help us understand the theory and the models, and that let us make refinements and/or reject some aspects of our theories.

  • Use Google Scholar to find updates that are important for your research.
  • Save papers in Paperpile (or Zotero) and annotate as needed.
  • Document your work in Google Docs (or another note taking app).
  • Share interesting papers and preprints with the whole lab group in the relevant channel(s) in Slack.

Hypothesis Generation

Hypotheses are generated to test assumptions and aspects of the theory and to test predictions of other theories. A hypothesis is a formal statement of something that can be tested experimentally. Hypotheses often arise from more general research questions, which are broad statements about what you are interested in or trying to discover. You might arrive at a research question or an idea while reading a paper, at a conference, while thinking about an observation you made, or by brainstorming in an informal group or lab meeting.

A lab meeting with my student learning fNIRS.

Notice that all of these assume that you put in some time and effort to understand the theory and then allow some time to work over ideas in your mind, on paper, or in a computer simulation.

  • Work on hypothesis generation in lab meetings, our advisory meetings, and on your own.
  • Document your thoughts in Google Docs (or your own notes on paper, OneNote or Evernote).
  • Share insights in lab meetings and in the relevant channel in Slack.

Design the Study/Experiment

Concurrent with hypothesis generation is experimental design. In most cases, we are designing experiments to test hypotheses about category representation and category learning and/or the predictions of computational models. We want to test hypotheses generated from theories and also carry out exploratory work to help refine our theories. Avoid the temptation to put the cart before the horse and come up with experiments and studies that will produce an effect for its own sake. We don’t just want to generate effects.

The design comes first. Consider the logic of your experiment, what you plan to manipulate, and what you want to measure. Avoid the temptation to add in more measures than you need, just to see if there’s an effect. For example, do you need to add in 2-3 measures of working memory, mood, or some demographic information just to see if there’s an effect there? If it’s not fully justified, it may hurt more than help because you have non-theoretically driven measures to contend with. I’ve been guilty of this in the past and it always comes back to haunt me.

  • Work on experiment generation in lab meetings, in advisory meetings, and on your own.
  • Document your work and ideas in Google Docs or a note taking app that you can share.
  • Use G*Power to estimate the correct sample size (see the R sketch after this list).
  • Use PsychoPy or Qualtrics to build your experiment.
  • Test these experiment protocols often: on yourself, on lab mates, and on volunteers.
  • Develop a script for research assistants who will be helping you carry out the study.
  • Share insights in lab meetings and in the relevant channel in Slack.
  • Organize tasks and chores in the relevant Trello board for your project.
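
If you would rather estimate sample size in R than in G*Power, the pwr package gives comparable answers. Here is a minimal sketch, assuming a two-group design; the effect sizes are placeholders that you should justify from theory or pilot data:

    library(pwr)  # install.packages("pwr") if needed

    # Per-group n for a two-sample t-test: medium effect (d = 0.5),
    # 80% power, alpha = .05
    pwr.t.test(d = 0.5, power = 0.80, sig.level = 0.05, type = "two.sample")

    # Per-group n for a one-way ANOVA with 3 groups and a medium effect (f = 0.25)
    pwr.anova.test(k = 3, f = 0.25, power = 0.80, sig.level = 0.05)

Either call returns the required n per group, which you can then adjust upward for expected attrition.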

Analysis Plan & Ethics Protocol

This is where we start to formalize things. An analysis plan links the hypothesis and the experimental design with the dependent variables and/or outcome measures. In this plan, we’ll describe and document how the data will be collected, visualized, analyzed, stored, and shared. This plan should describe how we will deal with outlier data, missing data, data from participants who did not complete the experiment correctly, experimenter error, malfunction, etc. This plan can include tentative predictions derived from a model and also a justification of how we intend to analyze and interpret the data. This plan can also be pre-registered with OSF, which is where we plan to share the data we collect with the scientific community.

At the same time, we also want to write an ethics protocol. This is a description of our experiment, the research question, and procedures for the University REB. It will also include standardized forms for information and consent, and policies for recruitment, participant safety, and data storage and security. The REB has templates and examples, and our lab Slack channel on ethics can include examples as well. Use templates whenever possible.

Both of these documents, the analysis plan and the ethics protocol, should describe exactly what we are doing and why we are doing it. They should provide enough information that someone else would be able to reproduce our experiments in their own lab. These documents will also provide an outline for your eventual method section and your results section.

  • Document your analysis plan and ethics protocol work in Google Docs.
  • Link these documents to the project sheet or Trello board for your project.
  • Share in the relevant channel in Slack.

Collect Data

Once the experiment is designed and the stimuli have been examined, we’re ready to collect data or to obtain data from a third party (which might be appropriate for model testing). Before you run your first subject, however, there are some things to consider. Take some time to run yourself through every condition several times and ask other lab members to do the same. You can use this to make sure things are working exactly as you intend, that the data are being saved on the computer, and that the experiment takes as long as planned.

When you are ready to collect data for your experiment:

  • Meet with all of your research volunteers to go over the procedure.
  • Book the experiment rooms on the Google Calendar.
  • Reserve a laptop or laptops on the Google Calendar.
  • Recruit participants through SONA or flyers.
  • Prepare the study for M-Turk or Prolific.
  • Use our lab email for recruitment.

After you have run through your experiment several times, documented all the steps, and ensured that everything is working exactly as you intended, you are ready to begin. While you are running your experiment:

  • Document the study in Google Docs, Trello, and/or Slack (as appropriate).
  • Make a note of anything unusual or out of the ordinary for every participant in a behavioural study.
  • Collect signatures from participants if you are paying them.
  • Data should be stored in text files that can be opened with Excel or Google Sheets or imported directly into R (see the sketch after this list). Be sure these are linked to the project sheet.
  • Make sure the raw data are labelled consistently and are never altered.
  • Be sure to follow the data storage procedures outlined in the ethics protocol.
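
As an illustration of why plain text files pay off, here is a minimal R sketch that reads every raw file for an experiment and combines them. The data/raw path, the file naming, and the participant column are assumptions; adapt them to your project:

    # Read every raw data file and combine them; the raw files themselves
    # are never edited
    files <- list.files("data/raw", pattern = "\\.csv$", full.names = TRUE)
    raw <- do.call(rbind, lapply(files, read.csv))

    # Sanity checks before any analysis
    length(files)           # should equal the number of participants run
    table(raw$participant)  # each participant should have the planned number of trials

    # Save a combined copy for later steps; the originals stay untouched
    write.csv(raw, "data/exp1_raw_combined.csv", row.names = FALSE)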

Data Management

Your data plan should specify where and how to store your data. While you are collecting data, you should be working on a script in R (or Python) to extract and summarize the raw data according to your plan. When you reach the planned sample size, ensure that all of the data are secure and backed up, and do an initial summary with your R script (a minimal sketch follows the list below).

As you work on summarizing and managing your data:

  • Make notes in the project sheet and/or Trello board about where the data are stored.
  • Document your steps in an R Notebook (or Python Notebook).
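
That initial summary script might look something like the sketch below, continuing from the hypothetical combined file above; the correct and rt columns are assumptions standing in for your real dependent variables:

    library(dplyr)

    raw <- read.csv("data/exp1_raw_combined.csv")

    # Per-participant, per-condition summary, following the analysis plan
    summary_df <- raw %>%
      group_by(participant, condition) %>%
      summarise(accuracy = mean(correct),
                median_rt = median(rt),
                .groups = "drop")

    write.csv(summary_df, "data/exp1_summary.csv", row.names = FALSE)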

Plots & Stats

Remember the photo of Dr. Katie Bouman, then a postdoc, when she first saw the first image of a black hole, rendered in part by her algorithms? That’s the best part of science: seeing your data visualized for the first time. When you have completed your experiment and taken care of the data storage and basic processing, it’s time to have fun and see what you discovered. The analysis plan is your guide: it describes how you want to analyze the data, what your dependent variables are, and how to conduct statistical tests with your data to test the hypotheses. But before you do any statistics, work on visualizing the data. Use your R notebook to document everything, and generate boxplots, scatter plots, or violin plots to see the means, medians, and the distribution of the data.

Because you are using R Notebooks to do the analysis, you can write detailed descriptions of how you created the plot, what the plot is showing, and how we should interpret the plot. If you need to drop or eliminate a subject’s data for any reason, exclude them from the data set in R; do not delete the data from the raw data file. Make a comment in the script noting which subject was dropped and why. This keeps everything clear and transparent.
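
Here is a sketch of what that might look like, continuing with the hypothetical summary file from the earlier sketches; the excluded participant and the reason are illustrative:

    library(ggplot2)

    summary_df <- read.csv("data/exp1_summary.csv")

    # Participant 17 excluded (equipment malfunction); documented here,
    # never deleted from the raw data file
    clean <- subset(summary_df, participant != 17)

    # Violin plot with individual points: show distributions, not just means
    ggplot(clean, aes(x = condition, y = median_rt)) +
      geom_violin() +
      geom_jitter(width = 0.1, alpha = 0.5) +
      stat_summary(fun = median, geom = "point", size = 3) +
      theme_minimal()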

You can also use R to conduct the tests that we proposed in the analysis plan. This might be a straightforward ANOVA or t-test, LME models, regression, etc. Follow the plan you wrote, and if you deviate from the plan, justify and document that exploratory analysis.
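
For example, the planned test on the same hypothetical summary data might be as simple as this:

    # Planned test: effect of condition on median RT (between subjects)
    clean <- subset(read.csv("data/exp1_summary.csv"), participant != 17)
    model <- aov(median_rt ~ condition, data = clean)
    summary(model)

    # Or, with only two conditions, a t-test
    t.test(median_rt ~ condition, data = clean)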

If you are fitting a decision boundary model to your data, make sure you have the code for the model (these will be on my GitHub), and do your modelling separately from the behavioural analysis. The GLM models are saved as R scripts, but you should copy or fork them into your R Notebooks for your analysis so you can document what you did. Make sure that you develop a version for your experiment and that the generic model is not modified.

If you are fitting a prototype or exemplar model, these have been coded in Python. Use Python 3 and a basic text editor or JupyterLab. JupyterLab might be better, as it can generate markdown and reproducible code like R Notebooks. Or just call Python from RStudio.
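
One way to call Python from RStudio is the reticulate package, which keeps the modelling and the write-up in a single notebook. A minimal sketch; the file and function names are hypothetical stand-ins for the actual model code:

    library(reticulate)

    use_python("/usr/local/bin/python3")  # point this at your Python 3 install
    source_python("exemplar_model.py")    # hypothetical: defines fit_exemplar()

    # fits <- fit_exemplar(clean)         # fit the model to your cleaned data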

  • Follow your analysis plan.
  • Consult with me or your peers if you notice any unusual patterns.
  • Make notes in the project sheet and/or Trello board about what analyses you’ve completed.
  • Document your steps in an R Notebook (or Python Notebook).
  • If you drop a participant for any reason, indicate this in the comments of your R script (or other notes). We want this information to be recorded and transparent.

Present and Explain Your Work

While you are working on your analysis, you should present the interim work often in lab meetings for the rest of the group, and we can discuss the work when we meet individually. The reason to present and discuss often is to keep the ideas and work fresh in your mind by reviewing manageable pieces of it. If you try to do too much at once, you may miss something or forget to document a step. Go over your work, make sure it’s documented, then work on the new analyses, and repeat. You should be familiar with your data and your analysis so that you can explain it to yourself, to me, to your peers, and eventually to anyone who reads your paper.

Use the following guidelines for developing your work:

  • Make your best plots and figures.
  • Present these to the lab on a regular basis.
  • Use RPubs to share summary work instantly with each other and on the web.
  • Keep improving the analysis after each iteration.
  • You should always have 8-10 slides that you can present to the group.
  • Document your work in R Notebooks, Google Docs, Trello, and Google Slides.

Write Papers Around This Workflow

The final step is to write a paper that describes your research question, your experimental design, your analysis, and your interpretation of what the analysis means. A scientific paper, in my opinion, has two important features:

  1. The paper should be clear and complete. That means it describes exactly what you wanted to find out, how and why you designed your experiment, how you collected your data, how you analyzed your data, what you discovered, and what that means. Clear and complete also means that it can be used by you or by others to reproduce your experiments.
  2. The paper should be interesting. A scientific paper should be interesting to read. It needs to connect to a testable theory, some problem in the literature, or an unexplained observation. It should be just as long as it needs to be.

I think the best way to generate a good paper is to make good figures. Try to tell the story of your theory, experiment, and results with figures. The paper is really just writing how you made the figures. You might have a theory or model that you can use a figure to explain. You can create clear figures for the experimental design, the task, and the stimuli. Your data figures, which you made according to your analysis plan, will frame the results section, and a lot of what you write is telling the reader what the figures show, how you made them, and what they mean. Writing a scientific paper is writing a narrative for your figures.

Good writing requires good thinking and good planning. But if you’ve been working on your experiment according to this plan, you’ve already done a lot of the thinking and planning work that you need to do to write things out. You’ve already made notes about the literature and prior work for your introduction. You have notes from your experimental design phase to frame the experiment. You have an ethics protocol for your methods section and an analysis plan for your results. You’ll need to write the discussion section after you understand the results, but if you’ve been presenting your 8-10 slides in lab meeting and talking about them you will have some good ideas and the writing should flow. Finally, if you’ve been keeping track of the papers in Paperpile, your reference section should be easy.

Submit the Paper

The final paper may have several experiments, each around the theme set out in the introduction. It’s a record of what we did, why we did it, and how. The peer reviewed journal article is the final stage, but before we submit the paper we have a few other steps to ensure that our work roughly conforms to the principles of Open Science, each of which should be straightforward if we’ve followed this plan.

  • Create a publication-quality preprint using the lab template. We’ll host this on PsyArXiv (unless submitting a blind ms.).
  • Create a file for all the stimuli or materials that we used and upload to OSF.
  • Create a data archive with all the raw, de-identified data and upload to OSF.
  • Upload a clean version of the R Notebooks that describe your analyses to OSF.

The final steps are organized around the requirements of each journal. Depending on where we decide to submit our paper, some of these may change. Some journals will insist on a Word .doc file; others will allow a PDF. In either case, assume that the Google Doc is the real version and that the PDF or .doc files are just for the journal submission. Common steps include:

  • Download the Google Doc as a MS Word Doc or PDF.
  • Create a blind manuscript if required.
  • Embed the figures if possible; otherwise, place them at the end.
  • Write a cover letter that summarizes the paper and explains why we are submitting it.
  • Identify possible reviewers.
  • Write additional summaries as required and generate keywords.
  • Check and verify the names, affiliations, and contact information for all authors.
  • Submit and wait for 8-12 weeks!

Conclusion

As I mentioned at the outset, this might not work for every lab or every project. But the take-home message–document everything you do and share your work for feedback–should resonate with most science and scholarship. Is it necessary to have a formal guide? Maybe not, though I found it instructive as the PI to write this all down. Many of these practices were already in place, but not really formalized. Do you have a similar document or plan for your lab? I’d be happy to hear about it in the comments below.

Use Paperpile’s Annotation System

Reading scientific papers as PDFs is a major part of being an academic. Professors, postdocs, grad students, and undergraduates end up working with PDFs, making notes, and then using those notes to write a manuscript or paper. Although there are lots of great PDF viewers and reference managers, I use Paperpile, a cloud-based PDF manager that was originally designed to be a reference manager for Google Docs. It can sync all your PDFs with your Google Drive (so you can read them offline) and neatly integrates with Google Scholar and Chrome so that you can import references and PDFs from anywhere. It handles citations in Docs and in Word and has a beta app for iPad that is brilliant.

We use this in my lab all the time. It’s a paid app, but it is not very expensive; there is education pricing, and as the lab PI, I just pay for a site license for all my trainees.

Making Notes

One of the best features is the note-taking and annotation system. Like most PDF viewers, Paperpile’s built-in viewer lets you highlight, mark up, and annotate PDFs with sticky notes. These annotations stay with the PDF and will sync across devices because it’s cloud-based. Just like in Adobe or Apple Preview, you can highlight, add notes, use strike-through, or even draw with a pencil. Paperpile organizes these well and makes it easy to navigate. And if you use the iPad app, you can make notes there that will show up in the browser, and notes in the browser will show up on the iPad. The icon on the upper right hides your notes.


Exporting and Sharing

If you’re reading along, taking notes and making highlights, you may want to share these with someone or use them in a manuscript (or even a manuscript review). There are several ways to do this.

Export the PDF

The File menu lets you print with or without annotations. If you want to send someone a clean PDF without your notes, that’s easy to do; or you can save your notes into the new PDF.


The exported PDF opens in other PDF viewers with your notes intact and editable (Apple Preview, for example). This is great for sharing with someone who does not use Paperpile. Of course, you can also print a clean PDF without the annotations.


Export the annotations only

If you are planning to write an annotated bibliography, a critical review, a meta-analysis, a paper for class, or even a manuscript review for a journal, the ability to export the notes is invaluable. In the same File menu, you will see the “export” option. This lets you export just your notes and highlights in several formats. If you want to share them online, for example, try the HTML option. This is great if you are writing a blog and want to include screenshots and notes. Notice that this keeps the annotations (notes, images, highlights) on the right and data about who made the notes on the left. Helpful if more than one person is making notes.

And of course, if you’re using this annotation tool to make notes for your own paper or a manuscript review, you can export just your notes as text or markdown, open them in Google Docs, Word, or any editor, and use them to help frame your draft. You have the contents of the notes as text and can quote highlighted text. Images are not saved, of course.

Conclusion

In my opinion, Paperpile is the best reference manager and PDF manager on the scene. Others, like Zotero, Mendeley, and EndNote, are also good (and Zotero is free, of course). Each has things it does really well, but if you already use Paperpile, or are curious about it, I strongly suggest you spend some time with the PDF viewer and annotations. It has really changed my workflow for the better. It’s just such well-designed software.


Comments, corrections, and suggestions are always welcome.

Mindful University Leadership

Academia, like many other sectors, is a complex work environment. Although universities vary in terms of their size and objectives, the average university in the United States, Canada, UK, and EU must simultaneously serve the interests of undergraduate education, graduate education, professional education, basic research, applied research, public policy research, and basic scholarship. Most research universities receive funding for operation from a combination of public and private sources. For example, my home university, The University of Western Ontario, receives its operating funds from tuition payments, governments, research funding agencies, and from private donors. Many other research universities are funded in similar ways, and most smaller colleges are as well.

Looking west over Lake Erie, Port Stanley, Ontario.

Faculty are at the center of this diverse institution, acting as the engine of teaching, research, and service. As a result, faculty members may find themselves occasionally struggling to manage these different interests. This article looks at the challenges that faculty members face, paying particular attention to the leadership role that many faculty play. I then explore the possible ways in which a mindfulness practice can benefit faculty well-being and productivity.

Challenges of Leadership in the University Setting

Although many work environments have similar challenges and issues (being pulled in different directions, time management, etc.), I want to focus on the challenges that faculty members face when working at and leading an average mid-sized or large university. The specific challenges will vary in terms of what role or roles a person is serving in, but let’s first look at challenges that might be common to most faculty members.

Challenge 1: Shifting tasks

“Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things. What I do takes long hours of studying and uninterruptible concentration.” — Donald Knuth

I love this quote from Donald Knuth, a professor of computer science, because it encapsulates the main challenge that so many of us have. We want to be on top of things (teaching, questions from students, cutting-edge research) but we also want to be on the bottom: digging deeply into a problem and finding a solution.

The average faculty member has, at a minimum, 2–3 very different kinds of jobs. We’re teachers, researchers/scholars, and we also help to run the university. Within these broadly-defined categories, we divide our teaching time between graduate and undergraduate teaching and mentorship. Research involves investigation, applying for grants, reading, analysis, writing, and dissemination. And running the university can make us managers, chairs, deans, and provosts, and as such, we’re responsible for hiring research staff, hiring other faculty members, and managing budgets.

These three categories require different sets of skills and shifting between them can be a source of stress. In addition, the act of shifting between them will not always go smoothly and this may result in a loss of effectiveness and productivity as the concerns from one category, task, or role bleed into another. Being mindful of the demands of the current task at hand is crucial.

For example, I find it especially difficult to transition after 2–3 hours of leading a seminar or lecture. Ideally, I would like to have some time to unwind. But many times, I also need to schedule a meeting in the afternoon and find that I have only a short amount of time to go from “lecture mode” into “meeting mode”. Worse, I might still be thinking about my lecture when the meeting begins (this is an even bigger challenge for me in 2020, because nearly everything is online, on Zoom, from my home office). Even among university leaders who have little or no direct teaching requirements, it is common to have to switch between very different topics. You might start the day answering emails (with multiple topics), move to a morning meeting on hiring negotiations, then a meeting about undergraduate planning, then an hour with a PhD student on a very specific and complex analysis of data for their dissertation research, followed by a phone call from a national news outlet asking about the research of one of your faculty members. Shifting between these tasks can reduce your effectiveness. The cognitive psychology literature refers to this as “set shifting” or “task shifting”, and research has supported the idea that there is always a cost to shift (Arrington & Logan, 2004; Monsell, 2003). These costs will eventually affect how well you do your job and also how you deal with stress. It’s difficult to turn your full attention to helping your student with an analysis when you are also thinking about your department’s budget.

As academics, we switch and shift tasks throughout the day and throughout the week. The primary challenge in this area is to be able to work on the task at hand and to be mindful of distractions. Of course, they will occur, but through practice, it may be possible to both minimize their impact and also reduce the stress and anxiety associated with the distractions.

Challenge 2: Shared governance

One aspect of academia that sets it apart from many corporate environments is the notion of “shared governance”. Though this term is common (and has been criticized as being somewhat empty), the general concept is that a university derives its authority from a governing board, but that faculty are also vested in the institutional decision-making process. This means that most universities have a faculty senate that sets academic policy, dean’s-level committees that review budgets and programs, and departmental committees that make decisions about promotion and tenure, hiring, and course assignments.

From a leadership perspective, this can mean that as a chair or dean you are always managing personnel, balancing the needs of faculty, students, budgets, senior administrators, and the public image of your university. There may not be a clear answer to the question of “who is the boss?” Sometimes faculty are asked to assume leadership roles for a set time and will need to shift from a collegial relationship to a managerial one (then back to a collegial one) with the same people. That is, one day you are colleagues, and the next you are their supervisor.

The challenge here is to understand that you may be manager, colleague, and friend at the same time. In this case, it’s very helpful to be mindful of how you interact with your colleagues such that your relationship aligns with the appropriate role.

Challenge 3: Finding time for research and scholarship

One of the most common complaints or concerns from faculty is that they wish they had more time for research. This is a challenge for faculty as well as leaders. Although a common workload assumes that a faculty member may spend 40% of their time on research, most faculty report spending much of their time in meetings. However, promotion and tenure are earned primarily through research productivity. Grants are awarded to research-productive faculty. That is, most of those meetings are important, but they do not lead to promotion and career advancement. This creates a conflict that can cause stress, because although 40% is the nominal workload, it may not be enough to be research productive. Other aspects of the job, like meetings related to teaching and service, may take up more than their fair share but often feel more immediate.

In order to be effective, academic leaders also need to consider these concerns from different perspectives. For example, when I was serving as the department chair for a short period, I had to assign teaching to our faculty. There are courses that have to be offered and teaching positions that have to be filled. And yet my colleagues still need to have time to do research and other service work. These can be competing goals, and they affect different parts of the overall balance of the department. The department chair needs to balance the needs of faculty to have adequate time for research with the needs of the department to be able to offer the right amount of undergraduate teaching. So not only is it a challenge to find time to do one’s own research, a department chair also needs to consider the same for others. Being mindful of these concerns and how they come into conflict is an important aspect of university leadership.

Considering these diverse goals and trying to meet them requires a fair degree of cognitive flexibility, and if you find yourself being pulled to think about teaching, about meetings, and about the workload of your colleagues, it is going to pull you away from being on top of your own research and scholarship. The primary challenge in this area is to create the necessary cognitive space for thinking about research questions and working on research.

Mindfulness and Leadership

I’ve listed three challenges for leaders in an academic setting: switching, shared governance, and finding time for research. There are more, of course, but let’s stick with these. I want to now explain what mindfulness practice is and how it might be cultivated and helpful for academic leaders. That is, how can mindfulness help with these challenges?

What is mindfulness?

A good starting point for this question is a definition that comes from Jon Kabat-Zinn’s work. Mindfulness is an open and receptive attention to, and awareness of, what is occurring in the present moment. For example, as I’m writing this article, I am mindful and aware of what I want to say. But I can also be aware of the sound of the office fan, aware of the time, aware that I am attending to this task and not some other task. I’m also aware that my attention will slip sometimes, and I think about some of the challenges I outlined above. Being mindful means acknowledging this wandering of attention and being aware of the slips, but not being critical or judgmental about my occasional wavering. Mindfulness can be described as a trait or a state. When described as a state, mindfulness is something that is cultivated via mindfulness practice and meditation.

How can mindfulness be practiced?

The best way to practice mindfulness is just to begin. Mindfulness can be practiced alone, at home, with a group, or on a meditation retreat. More than likely, your college or university offers drop-in meditation sessions (as mine does). There are usually meditation groups that meet in local gyms and community centers. Or, if you are technologically inclined, the Canadian company Interaxon makes a small, portable EEG headband called MUSE that can help develop mindfulness practice (www.choosemuse.com). There are also excellent apps for smartphones, like Insight Timer.

The basic practice is one of developing attentional control and awareness by practicing mindfulness meditation. Many people begin with breathing-focused meditation, in which you sit (in a chair or on a cushion), close your eyes, relax your shoulders, and concentrate on your breath. Your breath is always there, and so you can readily notice how you breathe in and out. You notice the moment where your in-breath stops and your out-breath begins. This is a basic and fundamental awareness of what is going on right now. The reason many people start with breathing-focused meditation is that when you notice that your mind begins to wander, you can pull your attention back to your breath. The pulling back is the subtle control that comes from awareness, and this is at the heart of the practice. The skill you are developing with mindfulness practice is the ability to notice when your attention has wandered, not to judge that wandering, and to shift your focus back to what is happening in the present.

Benefits of mindfulness to academic leaders

A primary benefit of mindfulness involves learning to be cognitively and emotionally present in the task at hand. This can help with task switching. For example, when you are meeting with a student, being mindful could mean that you bring your attention back to the topic of the meeting (rather than thinking about a paper you have been working on). When you are working on a manuscript, being mindful could mean keeping your attention on the topic of the paragraph and bringing it back from other competing interests. As a researcher and a scientist, there are also benefits, such as keeping an open mind about collected data and evidence, which can help you avoid cognitive pitfalls. In medicine, as well as other fields, this is often taught explicitly as the “default interventionist” approach, in which the decision-maker strives to maintain awareness of his or her assessments and the available evidence in order to avoid heuristic errors (Tversky & Kahneman, 1974). As a chair or a dean, being fully present could also manifest itself in learning to listen to ideas from many different faculty members and from students who are involved in the shared governance of academia.

Cognitive and clinical psychological research has generally supported the idea that both trait mindfulness and mindfulness meditation are associated with improved performance on several cognitive tasks that underlie the aforementioned challenges to academic leaders. For example, research studies have shown benefits to attention, working memory, cognitive flexibility, and affect (Chambers, Lo, & Allen, 2008; Greenberg, Reiner, & Meiran, 2012; Jha, Stanley, Kiyonaga, Wong, & Gelfand, 2010; Jha, Krompinger, & Baime, 2007). And there have been noted benefits to emotional well-being and behaviour in the workplace as well. This work has shown benefits like stress reduction, a reduction in emotional exhaustion, and increased job satisfaction (Hülsheger, Alberts, Feinholdt, & Lang, 2013; Nadler, Carswell, & Minda, 2020).

Given these associated benefits, mindfulness meditation has the potential to facilitate academic leadership by reducing some of what can hurt good leadership (stress, switching costs, cognitive fatigue) and facilitating what might help (improvements in attentional control and better engagement with others).

Conclusions

As I mentioned at the outset, I wrote this article from the perspective of a faculty member at a large research university, but I think the ideas apply to higher education roles in general. But it’s important to remember that mindfulness is not a panacea or a secret weapon. Mindfulness will not make you a better leader, a better teacher, a better scholar, or a better scientist. Mindful leaders may not always be the best leaders.

But the practice of mindfulness and the cultivation of a mindful state have been shown to reduce stress and improve some basic cognitive tasks that contribute to effective leadership. I find mindfulness meditation to be an important part of my day and an important part of my role as a professor, a teacher, a scientist, and an academic leader. I think it can be an important part of a person’s work and life.

References

Arrington, C. M., & Logan, G. D. (2004). The cost of a voluntary task switch. Psychological Science, 15(9), 610–615.

Chambers, R., Lo, B. C. Y., & Allen, N. B. (2008). The Impact of Intensive Mindfulness Training on Attentional Control, Cognitive Style, and Affect. Cognitive Therapy and Research, 32(3), 303–322.

Greenberg, J., Reiner, K., & Meiran, N. (2012). “Mind the Trap”: Mindfulness Practice Reduces Cognitive Rigidity. PloS One, 7(5), e36206.

Hülsheger, U. R., Alberts, H. J. E. M., Feinholdt, A., & Lang, J. W. B. (2013). Benefits of mindfulness at work: the role of mindfulness in emotion regulation, emotional exhaustion, and job satisfaction. The Journal of Applied Psychology, 98(2), 310–325.

Jha, A. P., Krompinger, J., & Baime, M. J. (2007). Mindfulness training modifies subsystems of attention. Cognitive, Affective & Behavioral Neuroscience, 7(2), 109–119.

Jha, A. P., Stanley, E. A., Kiyonaga, A., Wong, L., & Gelfand, L. (2010). Examining the protective effects of mindfulness training on working memory capacity and affective experience. Emotion, 10(1), 54–64.

Monsell, S. (2003). Task switching. Trends in Cognitive Sciences, 7(3), 134–140.

Nadler, R., Carswell, J. J., & Minda, J. P. (2020). Online Mindfulness Training Increases Well-Being, Trait Emotional Intelligence, and Workplace Competency Ratings: A Randomized Waitlist-Controlled Trial. Frontiers in Psychology, 11, 255.

Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131.

Open Science: My List of Best Practices

This has nothing to do with Open Science. I just piled these rocks up at Lake Huron.

Are you interested in Open Science? Are you already implementing Open Science practices in your lab? Are you skeptical of Open Science? I have been all of the above, and some recent debates on #sciencetwitter have discussed the pros and cons of Open Science practices. I decided to write this article to share my experiences as I’ve been pushing my own research in the Open Science direction.

Why Open Science?

Scientists have a responsibility to communicate their work to their peers and to the public. This has always been part of the scientific method, but the methods of communication have differed throughout the years and differ by field. This essay reflects my opinions on Open Science (capitalized to reflect that it is a set of principles), and I also give an overview of my lab’s current practices. I’ve written about this in my lab manual (which is also open), but until I sat down to write this essay, I had not really codified how my lab and research have adopted Open Science practices. This should not be taken as a recipe for your own science or lab, and these ideas may not apply to other fields. This is just my experience trying to adopt Open Science practices in my cognitive psychology lab.

Caveats First

Let’s get a few things out of the way…

First, I am not an expert in Open Science. In fact, until about 2-3 years ago, it never even occurred to me to create a reproducible archive for my data, or to ensure that I could provide analysis scripts to someone else so that they could reproduce my analysis, or that I would provide copies of all of the items/stimuli that I used in a psychology experiment. I’ve received requests for data before, but I usually handled those in a piecemeal, ad hoc fashion. If someone asked, I would put together a spreadsheet.

Second, my experience is only generalizable to comparable fields. I work in cognitive psychology and have collected behavioural data, survey questionnaire data, and electrophysiological data. I realize data sharing can be complicated by ethics concerns for people who collect sensitive personal or health data. I realize that other fields collect complex biological data that may not lend themselves well to immediate sharing.

Finally, the principles and best practices that I’m outlining here were adopted in 2018. Some of this was developed over the course of the last few years, but this is how we are running our lab now, and how we plan to run my research lab for the foreseeable future. That means there are still gaps: studies that were published a few years ago that have not yet been archived, papers that may not have a preprint, analyses that were done 20 years ago in SAS on the VAX 11/780 at the University at Buffalo. And if anyone wants to see data from my well-cited 1998 paper on prototype and exemplar theory, I can get it, but it is not going to be easy.

Core Principles

There are many aspects to Open Science, but I am going to outline three areas that cover most of these. There will be some overlap and some aspects may be missed.

Materials and Methods

The first aspect of Open Science concerns openness with respect to methods, materials, and reproducibility. In order to satisfy this criterion, a study or experiment should be designed and written up in such a way that another scientist or lab in the same field would be able to carry out the same kind of study if they wanted to. That means that any equipment that was used is described in enough detail or is readily available. It also means that computer programs that were used to carry out the study are accessible and the code is freely available. As well, in psychology, there are often visual, verbal, or auditory stimuli that participants make decisions about, or questions that they answer. These should also be available.

Data and Analysis

The second aspect of Open Science concerns the open availability of the data that have been collected in the study. In psychology, data take many forms, but usually refer to responses by participants on surveys, decisions about presented visual stimuli, EEG recordings, or data collected in an fMRI study. In other fields, data may consist of observations taken at a field station, measurements taken of an object or substance, or trajectories of objects in space. Anything that is measured, collected, or analyzed for a publication should be available to other scientists in the field.

Of course, in a research study or scientific project, the data that have been collected are also processed and analyzed. Here, several decisions need to be made. It may not always be practical to share raw data, especially if things were recorded by hand in a notebook or if the digital files are so large as to be unmanageable. On the other hand, it may not be useful to publish data that have been processed and summarized too much. For most fields, there is probably a middle ground where the data have been cleaned and minimally processed, but no statistical analyses have been done and the data have not been transformed. The path from raw data to this minimal state should be clear and transparent. In my experience so far, this is one of the most difficult decisions to make. I don’t have a solid answer yet.

In most scientific fields, data are analyzed using software and field-specific statistical techniques. Here again, several decisions need to be made while the research is being done in order to ensure that the end result is open and usable. For example, if you analyze your data with Microsoft Excel, what might be simple and straightforward to you might be uninterpretable to someone else. This is especially true if there are pivot tables, unique calculations entered into various cells, and transformations that have not been recorded. This, unfortunately, describes a large part of the data analysis I did as a graduate student in the 1990s. And I’m sure I’m not alone. Similarly, any platform that is proprietary will present limits to openness. This includes Matlab, SPSS, SAS, and other popular computational and analytic software. I think that’s why you see so many people who are moving towards Open Science practices encouraging the use of R and Python, because they are free, openly available, and they lend themselves well to scientific analysis.
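
To make that contrast concrete, a script like the sketch below replaces hidden spreadsheet formulas with visible, rerunnable steps; the file and column names are placeholders:

    data <- read.csv("exp1_data.csv")   # minimally processed data, as shared
    data$log_rt <- log(data$rt)         # every transformation is explicit code
    aggregate(log_rt ~ condition, data = data, FUN = mean)  # a summary anyone can rerun
    sessionInfo()                       # documents the R and package versions used

Anyone with a free copy of R can run these four lines and see exactly how the shared data become the reported summary.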

Publication

The third aspect of Open Science concerns the availability of the published data and interpretations: the publication itself. This is especially important for any research that is carried out at a university or research facility that is supported by public research grants. Most of these funding agencies require that you make your research accessible.

There are several good open access research journals that make the publications freely available for anyone because the author helps to cover the cost of publication. But many traditional journals are still behind a paywall and are only available for paid subscribers. You may not see the effects of this if you’re working in a university because your institution may have a subscription to the journal. The best solution is to create a free and shareable version of your manuscript, a preprint, that is available on the web and that anyone can access but does not violate the copyright of the publisher.

Putting This in Practice

I tried to put some guidelines in place in my lab to address these three aspects of open science. I started with one overriding principle: When I submit a manuscript for publication in a peer-reviewed journal, I should also ensure that at the time of submission, I have a complete data file that I can share, analysis scripts that I can share, and a preprint.

I implemented as much of this as possible with every paper that we’ve submitted for publication since late 2017 and with all our ongoing projects. We don’t submit a manuscript until we can meet the following:

  • We create a preprint of the manuscript that can be shared via a public online repository. We post this preprint to the online repository at the same time that we submit it to the journal.
  • We create shareable data files for all of the data collected in the study described in that manuscript. These are almost always unprocessed or minimally processed data in a Microsoft Excel spreadsheet or a text file. We don’t use Excel for any summary calculations, so the data are just data.
  • As we’re carrying out the data analysis, we document our analyses in R notebooks. We share the R scripts/notebooks for all of the statistical analyses and data visualizations in the manuscript. These are open and accessible and should match exactly what appears in the manuscript. In some cases, we have posted R notebooks with additional data visualizations beyond what is in the manuscript as a way to add value.
  • We also create a shareable document for any nonproprietary assessments or questionnaires that were designed for this study and copies of any visual or auditory stimuli used in the study.

Now, with this list of best practices, it would be disingenuous to suggest that every single paper from my lab meets all of these criteria. For example, one recently published study made use of Matlab instead of Python, because that’s how we knew how to analyze the data. But we’re using these principles as a guide as our work progresses. I view Open Science and these guidelines as an important and integral part of training my students. I view this as being just as important as the theoretical contributions that we’re making to the field.

Additional Resources and Suggestions

In order to achieve this goal, the following guidelines and resources have been helpful to me.

OSF

My public OSF profile lists current and recent projects. OSF stands for “Open Science Framework”, and it’s one of many data repositories that can be used to share data, preprints, unformatted manuscripts, analysis code, and other things. I like OSF, and it’s kind of incredible to me that this wonderful resource is free for scientists to use. But if you work at a university or public research institute, your library probably runs a public repository as well.

Preregistration

For some studies, preregistration may be a helpful additional step in carrying out the research. There are limits to preregistration, many of which are addressed with Registered Reports. At this point, we haven’t done any Registered Reports. Preregistration is helpful, though, because it encourages the researcher or student to lay out a list of analyses they plan to do, to describe how the data are going to be collected, and to make that plan publicly available before the data are collected. This doesn’t mean that preregistered studies are necessarily better, but it’s one more tool to encourage openness in science.

Python and R

If you’re interested in open science, it really is worth looking closely at R and Python for data manipulation, visualization, and analysis. In psychology, for example, SPSS has been a long-standing and popular way to analyze data. SPSS does have a syntax mode that allows researchers to share their analysis protocol, but that mode of interacting with the program is much less common than the GUI. Furthermore, SPSS is proprietary: if you don’t have a license, you can’t easily see how the analyses were done. The same is true of data manipulation in Matlab. My university has a license, but if I want to share my data analysis with a private company, they may not have one. Anyone in the world, by contrast, can install and use R and Python.
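To illustrate the contrast, here is a minimal sketch of a fully scripted analysis in base R. The file and variable names are hypothetical (they continue the invented example above); the point is that anyone with the free R environment can rerun this and see exactly what was done:

    # A minimal sketch of a shareable, scripted analysis in base R.
    # Hypothetical file and variable names; no license is needed to rerun it.
    dat <- read.csv("exp1_raw_data.csv")

    # An independent-samples t-test comparing response times across the
    # two conditions, analogous to what might be clicked through in the
    # SPSS GUI but recorded here as code.
    print(t.test(rt_ms ~ condition, data = dat))

    # A simple visualization of the same comparison.
    boxplot(rt_ms ~ condition, data = dat,
            xlab = "Condition", ylab = "Response time (ms)")

Because the script itself is the record of the analysis, sharing it means sharing the entire protocol rather than a description of menu clicks.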

Conclusion

Science isn’t a matter of belief. Science works when people trust in the methodology, the data and interpretation, and by extension, the results. In my view, Open Science is one of the best ways to encourage scientific trust and to encourage knowledge organization and synthesis.

How do you plan to use your PhD?

If you follow my blog or Medium account, you’ve probably already read some of my thoughts and musings on the topic of running a research lab, training graduate students, and being a mentor. I think I wrote about that just a few weeks ago. But if you haven’t read any of my previous essays, let me provide some context. I’m a professor of psychology at a large research university in Canada, the University of Western Ontario. Although we’re seen as a top choice for undergraduates because of our excellent teaching and student life, we also train physicians, engineers, lawyers, and PhD students in dozens of fields. My research group fits within the larger area of cognitive neuroscience, which is one of our university’s strengths.

Within our large group (Psychology, the Brain and Mind Institute, BrainsCAN, and other groups), we have some of the very best graduate students and postdocs in the world, not to mention some excellent faculty colleagues. I’m not writing any of this to brag or boast, but rather to give the context that we’re a good place to be studying cognition, psychology, and neuroscience.

And I’m not sure any of our graduates will ever get jobs as university professors.

The Current State of Affairs

Gordon Pennycook, from Waterloo and soon from the University of Regina, wrote an excellent blog post and paper on the job market for cognitive psychology professors in Canada. You might think this is too specialized, but he makes the case that we can probably extrapolate to other fields and countries and find the same thing. And since this is my field (and Gordon’s too), it’s easy to see how this affects students in my lab and in my program.

One thing he noted is that the average Canadian tenure-track hire now has 15 publications on their CV when hired. That’s a long CV, as long as what I submitted in my tenure dossier in 2008. It’s certainly a longer CV than what I had when I was hired at Western in 2003: I was hired with 7 publications (two first-author) after three years as a postdoc and three years of academic job applications. And it’s certainly longer than what the most eminent cognitive psychologists had when they were hired. Michael Posner, whose work I cite to this day, was hired straight from Wisconsin with one paper. John Anderson, whose work I admire more than any other cognitive scientist’s, was hired at Yale with a PhD from Stanford and 5 papers on his CV. Nancy Kanwisher was hired in 1987 with 3 papers from her PhD at UCLA.

Compare that to a recent hire in my own group, who was hired with 17 publications in great journals after 5 years as a postdoc. Or compare it to most of our recent hires and short-listed applicants, who had completed a second postdoc before they were hired. Even our postdoctoral applicants, people applying for 2-3 year postdocs at my institution, are already postdocs looking for a better postdoc to get more training and become more competitive.

So it’s really a different environment today.

The fact is, you will not get a job as a professor straight after finishing a PhD. Not in this field, and not in most fields. Why do I say this? Well, for one, it’s not possible to publish 15-17 papers during your PhD. Not in my lab, at least. Even if I added every student to every paper I published, they would not have a CV with that many papers; I simply can’t publish that many papers and keep everything straight. And I can’t really put every student on every paper anyway. If the PhD is not adequate for getting a job as a professor, what does that mean for our students, our program, and for PhD programs in general?

Expectation mismatch

Most students enter a PhD program with the idea of becoming a professor. I know this because I used to be the director of our program, and that’s what nearly every student says, unless they are applying to our clinical program with the goal of being a clinician. If students are seeking a PhD to become a professor, but we can clearly see that the PhD is not sufficient, then students’ expectations are not being met by our program. We admit students to the PhD with most hoping to become university professors, and then they slowly learn that it’s not possible. Our PhD is, in this scenario, merely an entry into the ever-lengthening postdoc stream, which is where you actually prepare to be a professor. We don’t have well-thought-out alternatives for any other stream.

But we can start.

Here’s my proposal

  1. We have to level with students and applicants right away that “tenure-track university professor” is not going to be the endgame of the PhD. Even the very best students will be looking at 1-2 postdocs before they are ready for that. For academic careers, the PhD is training for the postdoc in the same way that med school is training for residency and fellowship.
  2. We need to encourage students to begin thinking about non-academic careers in their first year. This means encouraging students’ ownership of their career planning. There are top-notch partnership programs like Mitacs and OCE (these are Canadian, but similar programs exist in the US, EU, and UK) that help students transition into corporate and industrial careers. We have university programs as well. And we can encourage students to look at certificate programs to ensure that their skills match the market. But students won’t always know about these things if their advisors don’t know or care.
  3. We need to emphasize and cultivate a supportive atmosphere. Be open and honest with students about these things and encourage them to be open as well. Students should be encouraged to explore non-academic careers and not be made to feel guilty for “quitting academia.”

I’m trying to manage these things in my own lab. It is not always easy, because I was trained to all but expect that the PhD would lead into a job as a professor. That was not really true when I was a student, and it’s even less true now. But I have to adapt. Our students and trainees have to adapt, and it’s incumbent upon us to guide and advise.

I’d be interested in feedback on this topic.

  • Are you working on a PhD to become a professor?
  • Are you a professor wondering if you’d be able to actually get a job today?
  • Are you training students with an eye toward technical and industrial careers?


The Professor, the PI, and the Manager

Here’s a question that I often ask myself: How much should I be managing my lab?

I was meeting with one of my trainees the other day, and this grad student mentioned that they sometimes don’t know what to do during the workday and feel like they are wasting a lot of their time. As a result, this student will end up going home and maybe working on a coding class or (more often) doing non-grad-school things. We talked about what this student is doing, and I agreed: they are wasting a lot of time and not really working very effectively.

Before I go on, some background…

There is no shortage of direction in my lab, or at least I don’t think so. I think I have a lot of things in place. Here’s a sample:

  • I have a detailed lab manual that all my trainees have access to. I’ve sent this document to my lab members a few times, and it covers a whole range of topics about how I’d like my lab group to work.
  • We meet as a lab 2 times a week. One day is to present literature (journal club) and the other day is to discuss the current research in the lab. There are readings to prepare, discussions to lead, and I expect everyone to contribute.
  • I meet with each trainee, one-on-one, at least every other week, and we go through what each student is working on.
  • We have an active lab Slack team; every project has a channel.
  • We have a project management Google sheet with deadlines and tasks that everyone can edit, add things to, see what’s been done and what hasn’t been done.

So there is always stuff to do, but I also try not to micromanage my trainees. I generally assume that students will want to be learning and developing their scientific skill set. This student is someone who has been pretty set on looking for work outside of academia, and I’m a big champion of that; I am a champion of helping any of my trainees find a good path. But despite all the project management and meetings, this student was feeling lost and never sure what to work on. And so they were feeling like grad school has nothing to offer in the realm of skill development for this career direction. Are my other trainees feeling the same way?

Too much or too little?

I was kind of surprised to hear one of my students say that they don’t know what to work on, because I have been working harder than ever to make sure my lab is well structured. We’ve even dedicated several lab meetings to the topic.

The student asked what I work on during the day, and it occurred to me that I don’t always discuss my daily routine. So we met for over an hour and I showed this student what I’d been working on for the past week: an R notebook that will accompany a manuscript I’m writing, so that all the analyses for an experiment are open and transparent. We talked about how much time that’s been taking: how I spent 1-2 days optimizing the R code for a computational model, how that code will then need clear documentation, how the OSF page will also need folders for the data files, stimuli, and experimenter instructions, and how those need to be uploaded. I have been spending dozens of hours on this one small part of one component of one project within one of the several research areas in my lab, and there’s so much more to do.

Why aren’t my trainees doing the same? Why aren’t they seeing this, despite all the project management I’ve been doing?

I want to be clear: I am not trying to be critical in any way of any of my trainees. I’m not singling anyone out. They are good students, and it’s literally my job to guide and advise them. So I’m left with the sense that they are feeling unguided, with the perception that there’s not much to do. If I’m supposed to be the guide and they are feeling unguided, this seems like a problem with my guidance.

What can I do to help motivate?

What can I do to help them get organized, feel motivated, and be productive?

I expect some independence for PhD students, but am I giving them too much? I wonder if my lab would be a better training experience if I were just a bit more of a manager.

  • Should I require students to be in the lab every day?
  • Should I expect daily summaries?
  • Should I require more daily evidence that they are making progress?
  • Am I sabotaging my efforts to cultivate independence by letting them be independent?
  • Would my students be better off if I assumed more of a top down, managerial role?

I don’t know the answers to these questions. But I know that there’s a problem. I don’t want to be a boss, expecting them to punch the clock, but I also don’t want them to float without purpose.

I’d appreciate input from other PIs. How much independence is too much? Do you find that your grad students are struggling to know what to do?

If you have something to say about this, let me know in the comments.

Dealing with Failure

When the hits come, they really come hard.

I’m dealing with some significant personal/professional failures this month.

I put in for two federal operating grants this past year: one from NSERC to fund my basic cognitive science work on learning and memory and one from SSHRC to fund some relatively new research on mindfulness meditation. I worked pretty hard on these last fall.

And today I found out that neither was funded.


Port Stanley Beach, ON 2018

This means that for the first time in many years, my lab does not have an active federal research grant. The NSERC renewal is particularly hard to swallow, since I’ve held multiple NSERC grants and that program has a pretty high funding rate relative to others. I feel like the rug was pulled out from under me, and I worry about how to support the graduate students in my lab. I can still carry on doing good research this coming year, and I have some residual funds, but I won’t lie: this is very disappointing.

The cruelest month, the cruelest profession.

It’s often said that academic and scientific work loads heavily on dealing with failure. It’s true. I’ve had failed grants before. Rejected manuscripts. Experiments that I thought were interesting or good but that fell apart under additional scrutiny. For every success, there are multiple failures, and that’s all just part of being a successful academic. Beyond that, many academics may work 6-8 years to get a PhD, do a postdoc, and find themselves rejected from one job after another. Others struggle on the tenure track and may fail to achieve that milestone.

And April truly is the cruelest month in academia. Students may have to deal with rejection from grad school, med school, graduate scholarships, job applications, internships, and residency programs. They worry about their final exams. Faculty worry about grant rejections, job searches, and a whole host of other things. (And at least here in Canada, we still have snow in the forecast…)

Why am I writing this?

Well, why not? I’m not going to hide these failures in shame. Or try to blame someone else. I have to look these failures in the eye, own them, take responsibility for them, and keep working. Part of that means taking the time to work through my emotions and feelings about this. That’s why I’m writing this.

I’m also writing, I guess, to say that it’s worth keeping in mind that we all deal with some kind of stress, anxiety, or rejection. Even people who seem to have it together (like me: recently promoted to Full Professor, respectable research output, several teaching awards, a textbook, and a kind and decent teacher and mentor to hundreds of students) take hits. But really, I’m doing fine. I’m still lucky. I’m still privileged. I know that others are hurting more than I am. I have no intention of wallowing in pity or raging. I’m not going to stop working, stop writing, stop doing research, or stop trying to improve as a teacher. Moving forward is the only way I can move.

Moving on

We all fail. The question is: What are you going to do about it?

From a personal standpoint, I’m not going to let this get me down. I’ve been in this boat before. I have several projects that are now beginning to bear fruit. I’ve had some terrific insights about new collaborative work. I have a supportive department, and I’m senior enough to weather quite a lot. (Though I’m not Job, so you don’t have to test me, Lord!)

From a professional standpoint, though, I think I know what the problems were, and I don’t even need to see the grant reviews or committee comments (though I will be looking at them soon). There’s only one of me, and branching off in a new direction three years ago to pursue some new ideas took time away from my core program; I think both suffered a bit as a result. That happens, and I can learn from the experience.

I’ll have to meet with my research team and students next week and give them the bad news. We’re probably going to need to have some difficult conversations about working through this, and I know it will hit some of them hard too.

It might also mean some scholarly pruning. It might mean turning off a few ideas to focus more on the basic cognitive science that’s most important to me.

Congratulations to everyone who got good news this month: a successful grant, an acceptance to med school, a job offer, a paper published. That success was earned. And for those of us getting bad news: accept it, deal with it, and move forward.

Now enjoy the weekend everyone.


Artificially Intelligent—At the Intersection of Bots, Equity, and Innovation

This article was written in collaboration with my wife, Elizabeth; the ideas were generated during some of the great discussions we have on our evening 5k runs.

We all remember Prime Minister Trudeau’s famous response when asked about his gender-equity promise for filling roles in the cabinet: “because it’s 2015.” This call to action comes quite late in the historical span of modernity, but we’re glad someone at the highest levels of government in a developed nation has strongly proclaimed it. Most of us in Canada, and likely around the world, were pleased to see Trudeau staff his cabinet with a significant number of female leaders in important decision-making roles. And now it’s 2017, a year that has been pivotal, to say the least. Last spring, Canada’s Minister of Science, Dr. Kirsty Duncan, announced that universities in Canada are now required to improve their processes for hiring Canada Research Chairs and to ensure those practices and review plans are equitable, diverse, and inclusive. The Government of Canada’s announcement is a call to action to include more women and other underrepresented groups at these levels, and it essentially comes down to an ultimatum: research universities will simply not receive federal funding allocations for these programs unless they take equity, diversity, and inclusion seriously in their recruitment and review processes.

When placed under the spotlight, the situation is a national embarrassment. Currently there is one woman Canada Excellence Research Chair in this country, and for women entrepreneurs the statistics are not much better. Women innovators in the industrial and entrepreneurial spheres are often left without a financial net, largely as a result of a lack of overall support in business environments and major gaps in policy and funding. The good news is that change is happening now, and it’s affecting policies and practices at basic funding and policy levels. Federal and provincial research granting agencies in Canada are actively responding to the call for more equitable and inclusive review practices within the majority of their programs. The message from the current Canadian government is clear: get on board with your EDI policies and practices, or your boat won’t leave the harbour. But there’s always more work to be done.

The Robot Revolution

Combined with this pivotal political moment and the ongoing necessity of a level playing field for underrepresented groups, humans are situated at a crossroads of theory and praxis in human-machine interaction. The current intersection of human and machine has critical implications for the academy, innovation, and our workplaces. It exposes the gaps so we can see what is possible, and we know the tools are here and must be harnessed for change. Even though we are living through mini “revolutions” each day as new technologies, platforms, and code stream before our eyes, humanity has been standing at this major intersection for a couple of centuries or more: at the very least, since the advent of non-human technologies that help humans process information and communicate ideas (cave paintings, the book, the typewriter, Herb Simon’s General Problem Solver). The human-AI link we need to critically assess now, however, is how this convergence of human and machine can work for women and underrepresented groups in the academy and entrepreneurial sectors in powerful ways. When it comes to creating more equitable spaces and paying women what they deserve, we need to move beyond gloomy statements like “the robots are taking our jobs.” We must seek to understand how underrepresented and underpaid people can benefit from robots rather than running from them. And we must seek to understand why women in the academy, industry, and other sectors haven’t been using AI tools in dynamic ways all along. (Some are, of course, as evidenced here: two women business owners harnessed the power of technology to grow their client and customer base by sending emails from a fictional business partner named “Keith.” Client response to “Keith” seemed to do the trick in getting their customers and backers to take them seriously.)

Implicit Bias

In the psychology of decision making, a bias is usually defined as a tendency to make decisions in a particular way. In many cases, a bias can be helpful and adaptive: we all have a bias to avoid painful situations. In other cases, a bias can lead us to ignore information that would result in a better decision. An implicit bias is a bias that we are unaware of, or the unconscious application of a bias that we are aware of. The construct has been investigated in how people apply stereotypes. For example, if you instinctively cross the street to avoid walking past a person of a different race or ethnic group, you are letting an implicit bias direct your behaviour. If you instinctively tend to doubt that a woman who takes a sick day is really sick, but tend to believe the same of a man, you are letting an implicit bias direct your behaviour. Implicit bias has been shown to affect hiring decisions and teaching evaluations. Grants submitted by women scientists often receive lower scores, and implicit bias is the most likely culprit. Implicit bias is difficult to avoid precisely because it is implicit: the effect occurs without our being aware of it. We can overcome these biases if we are able to become more aware that they are happening. But AI also offers a possible way to overcome them.

An Engine for Equity at Work

AI and fast-evolving technologies can and should be used by women right now. We need to understand how they can be harnessed to create balanced workplaces, generate opportunity in business, and improve how we make decisions that directly affect women’s advancement and recognition in the academy. What promise do AI tools hold for the development of balanced and inclusive forms of governance, review-panel practices, opportunities for career advancement and recognition, and funding for start-ups? How can we use the power of these potent and disruptive technologies to improve processes and structures in the academy and elsewhere, making them more equitable and inclusive of all voices? There’s no denying that the tech space is changing things rapidly, but what is most useful to us now for correcting imbalances and fixing inequitable, crumbling patriarchal structures? We need a map to navigate the intersection of rapid tech development and human-machine interaction, and to use AI effectively to reduce cognitive and unconscious biases in our decision-making; to improve the way we conduct and promote academic research, innovation, and governance for women and underrepresented groups of people.


Some forward-thinking companies are using this approach now. For example, several startups are using AI to prescreen candidates for possible interviews. In one case, the software (Talent Sonar) structures interviews, extracts candidate qualifications, and removes candidates’ names and gender information from the report. These algorithms are designed to help remove implicit bias in hiring by focusing on a candidate’s attributes and workplace competencies without any reference to gender. Companies relying on these kinds of AI algorithms report a notable increase in hiring women. Artificial intelligence, far from replacing workers, is actually helping to diversify and improve the modern workforce.
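As a toy illustration of the blinding idea (a generic sketch with invented data, not how Talent Sonar or any particular product actually works), the core step is simple: strip the identifying fields from candidate records before reviewers ever see them. In R, that might look like this:

    # Hypothetical candidate records; all names and values are invented.
    candidates <- data.frame(
      name         = c("A. Smith", "B. Jones"),
      gender       = c("F", "M"),
      years_exp    = c(7, 5),
      publications = c(12, 9),
      skills_score = c(88, 91)
    )

    # Reviewers receive only the job-relevant attributes; the
    # identifying fields are dropped before anyone scores the file.
    blinded <- candidates[, setdiff(names(candidates), c("name", "gender"))]
    print(blinded)

Real systems are of course far more sophisticated, but the principle is the same: the decision-relevant information is separated from the bias-inducing information.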

Academics have seen this change coming. Donna Haraway, in her Cyborg Manifesto, reconceptualizes modern feminist theory through a radical critique of the relationship between biology, gender, and cybernetics. For Haraway, a focus on the cybernetic, or the artificially intelligent, removes the reliance on gender in changing the way we think about power and how we make decisions about what a person has achieved or is capable of doing. Can we, for example, start to aggressively incorporate AI methods for removing implicit or explicit bias from grant review panels, or, more radically, remove humans from the process entirely? When governing boards vote on who will sit on the next Board of Trustees, or when university review committees adjudicate a female colleague’s tenure file, could this not be done via AI mechanisms, or with an application that eliminates gender and uses keyword recognition to assess the criteria? When we use AI to improve our decision making, we also have the ability to make it more equitable, diverse, and inclusive. We can remove implicit or explicit cognitive biases based on gender or orientation, for example, when we are deciding who will be included in the next prestigious cohort of Canada Research Chairs.

AI can and will continue to change the way human work is recognized in progressive ways: recognition of alternative work during parental leaves, improved governance and funding models, construction of equitable budgets and policy, and enhanced support for women entrepreneurs and innovators. AI is genderless. It is non-hierarchical. It has the power to be tossed like a stick of dynamite to disrupt ancient academic structures that inherently favour patriarchal models of advancing up the tenure track. Equalization via AI gives women and underrepresented groups the power to be fully recognized and supported, from the seeds of their innovation (the academy) to the mobilization of those ideas in entrepreneurial spaces. The robots are in fact still working for us; at least, for now.

Does This Project Bring Me Joy?


I have too many research projects going on.

It’s great to be busy, but I’m often overwhelmed in this area. As a university professor, some of my job is well defined (e.g. teaching) but other parts not so much. My workload is divided into 40% research, 40% teaching, and 20% service. Within each of these, I have some say as to what I can take on. I can teach different classes and volunteer to serve on various committees. But the research component is mine. This is what I really do. I set the agenda. I apply for funding. This is supposed to be my passion.

So why do I feel overwhelmed in that area?

I think I have too many projects going on. And I don’t mean that I am writing too many papers; I’m most certainly not doing that. I mean I have too many different kinds of projects. There are several projects on psychology and aging, projects on brain electrophysiology and category learning, a project on meditation and wellbeing in lawyers, a project on patient compliance, a project on distraction from smartphones, plus 4-5 other ideas in development, and at least 10 projects that are most charitably described as “half-baked ideas that I had on the way home from a hockey game.”

Add to this the many projects with students that I’m supervising, projects that may not quite be in my wheelhouse but are close. And I’ll admit, I have difficulty keeping these things straight. I’m interested in things, but when I look at the list of them, I confess I sometimes have a tough time seeing a theme. And that’s a problem, because it means I’m not fully immersed in any one project. I cease to be an independent and curious scientist and become a mediocre project manager. And when I look at my work objectively, more often than not, it seems mediocre.

Put another way: sometimes I’m not really sure what I do anymore…

So what should I do about this, other than complain on my blog? I have to tidy up my research.

A Research Purge

There is a very popular book called “The Life-Changing Magic of Tidying Up.” I have not read this book, but I have read about this book (and let’s be honest, that’s sometimes the best we can do). The essence of the approach is that you should not be hanging on to things that do not bring you joy.

Nostalgia is not joy.

Lots of stuff getting in the way is not joy. And so you go through things, one category at a time, look at each thing, and ask, “Does this item spark joy?” If the answer is no, you discard it. I like this idea.

If this works for a home or a room, a physical space, then it should work for the mental space of my research projects. So I’m going to try it. I thought about this last year but never quite implemented it. I should go through each project and each subproject and ask, “Does this project bring me joy?” or “Is there joy in trying to discover this?” Honestly, if the answer is “no” or “maybe,” why should I work on it? This may mean that I give up on some things and that some possible papers will not get published. That’s OK, because I will not compel myself to carry out research and writing that brings me no joy. Why should I? I suspect I would be more effective as a scientist because I will (hopefully) focus my efforts on several core areas.

This means, of course, that I have to decide what I do like. And it does not have to be what I’m doing. It does not have to be what I’ve done.

The Psychology of the Reset

Why do we like this? Why do people want to cleanse? To reset? To get back to basics? It seems to be the top theme in so many pop-psych and self-help books: getting rid of things, a detox or a “digital detox,” starting over, getting back to something. I really wonder about this. And although I wonder why we behave this way, I suspect I would find joy in carrying out a research study on it… I must resist the urge to start another project.

I’m going to pare down. I still need to teach, supervise, and serve on editorial boards, etc.; that’s work. I’m not complaining, and I like the work. But I want to spend my research and writing time on projects that spark joy: investigating and discovering things that I’m genuinely curious about, curious enough to put in the hours and time to do the research well.

I’d be curious, too, to know if others have tried this. Has it worked? Have you become a better scholar and scientist by decluttering your research space?

Thanks for reading and comments are welcome.