
The Scientific Workflow

Updated Sept 2, 2019

When new trainees enter your lab, do you have a plan or a guide for them? I have a lab manual that explains roles and responsibilities, but I did not (until recently) have a guide for how we do things. I wrote this to help my own trainees after a lab meeting last week where we discussed ideas around managing our projects. It started as a simple list, and I’m now making it part of my lab manual. You can find a formatted version here, and the LaTeX files here.

Introduction

This is my guide for carrying out cognitive psychology and cognitive science research in my lab. The workflow is specific to my lab, but can be adapted. If you think this is helpful, please feel free to share and adapt for your own use. You can keep this workflow in mind when you are planning, conducting, analyzing, and interpreting scientific work. You may notice two themes that seem to run throughout the plan: documenting and sharing. That’s the take home message: Document everything you do and share your work for feedback (with the group, your peers, the field, and the public). Not every project will follow this outline, but most will.

Theory & Reading

The first step is theory development and understanding how your work relates to the relevant literature. We’re involved in cognitive science, and we develop and test theories about how the mind forms concepts and categories. We usually work from two primary theories: prototype/exemplar theory, both variants of which deal with category representation, and multiple-systems theory (COVIS is an example), which addresses the category-learning process and rule use. You should keep up with developments in the field using Google Scholar alerts and its recommendations. I check every week and I recommend that you do as well. We want to test the assumptions of these theories, understand what they predict, probe their limitations, and contrast them with alternative accounts. We’re going to design experiments that help us understand the theory and the models, and that let us refine and/or reject some aspects of our theories.

  • Use Google Scholar to find updates that are important for your research.
  • Save papers in Paperpile and annotate as needed.
  • Document your work in Google Docs (or another note taking app).
  • Share interesting papers and preprints with the whole lab group in the relevant channel(s) in Slack.

Hypothesis Generation

Hypotheses are generated to test assumptions and aspects of the theory and to test the predictions of other theories. A hypothesis is a formal statement of something that can be tested experimentally. Hypotheses often arise from more general research questions, which are broad statements about what you are interested in or trying to discover. You might arrive at a research question or an idea while reading a paper, at a conference, while thinking about an observation you made, or by brainstorming in an informal group or lab meeting. Notice that all of these assume that you put in some time and effort to understand the theory and then allow some time to work over ideas in your mind, on paper, or in a computer simulation.

  • Work on hypothesis generation in lab meetings, our advisory meetings, and on your own.
  • Document your thoughts in Google Docs (or your own notes on paper, OneNote or Evernote).
  • Share insights in lab meetings and in the relevant channel in Slack.

Design the Study/Experiment

Concurrent with hypothesis generation is experimental design. In most cases, we are designing experiments to test hypotheses about category representation and category learning and/or the predictions of computational models. We want to test hypotheses generated from theories and also carry out exploratory work to help refine our theories. Avoid the temptation to put the cart before the horse and come up with experiments and studies that will produce an effect for its own sake. We don’t just want to generate effects.

The design comes first. Consider the logic of your experiment, what you plan to manipulate, and what you want to measure. Avoid the temptation to add more measures than you need just to see if there’s an effect. For example, do you really need two or three measures of working memory, mood, or demographics? If a measure isn’t fully justified, it may hurt more than help, because you’ll have non-theoretically-driven measures to contend with. I’ve been guilty of this in the past, and it always comes back to haunt me.

  • Work on experiment generation in lab meetings, in advisory meetings, and on your own.
  • Document your work and ideas in Google Docs or a note taking app that you can share.
  • Use G*Power to estimate the required sample size.
  • Use PsychoPy or Qualtrics to build your experiment.
  • Test these experiment protocols often.
  • Develop a script for research assistants who will be helping you carry out the study.
  • Share insights in lab meetings and in the relevant channel in Slack.
  • Organize tasks and chores in the relevant Trello board for your project.
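G*Power is the tool of record for the sample-size step above; as a quick cross-check, the same two-group calculation can be approximated in a few lines of Python. This is a normal-approximation sketch with placeholder values for effect size, alpha, and power (G*Power’s exact t-based answer will typically be a participant or two higher):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample t-test
    via the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical z for the two-sided test
    z_beta = z.inv_cdf(power)           # z corresponding to the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A medium effect (d = 0.5) at alpha = .05 and 80% power.
print(n_per_group(0.5))
```

A useful habit is to run a few plausible effect sizes and see how steeply the required n climbs as d shrinks.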

Analysis Plan & Ethics Protocol

This is where we start to formalize things. An analysis plan links the hypothesis and the experimental design with the dependent variables and/or outcome measures. In this plan, we’ll describe and document how the data will be collected, visualized, analyzed, stored, and shared. The plan should describe how we will deal with outliers, missing data, data from participants who did not complete the experiment correctly, experimenter error, equipment malfunction, etc. It can include tentative predictions derived from a model and a justification of how we intend to analyze and interpret the data. This plan can (and probably should) be pre-registered with OSF, which is where we’ll plan to share the data we collect with the scientific community.

At the same time, we also want to write an ethics protocol. This is a description of our experiment, the research question, and procedures for the University REB. It will also include standardized forms for information and consent, and policies for recruitment, subject safety, and data storage and security. The REB has templates and examples, and our lab Slack channel on ethics can include examples as well. Use templates whenever possible.

Both of these documents, the analysis plan and the ethics protocol, should describe exactly what we are doing and why we are doing it. They should provide enough information that someone else would be able to reproduce our experiments in their own lab. These documents will also provide an outline for your eventual method section and your results section.

  • Document your analysis plan and ethics protocol work in Google Docs.
  • Link these documents to the project sheet or Trello board for your project.
  • Share in the relevant channel in Slack.

Collect Data

Once the experiment is designed and the stimuli have been examined, we’re ready to collect data or to obtain data from a third party (which might be appropriate for model testing). Before you run your first subject, however, there are some things to consider. Take some time to run yourself through every condition several times and ask other lab members to do the same. Use this to make sure things are working exactly as you intend, that the data are being saved on the computer, and that the experiment takes as long as planned.

When you are ready to collect data for your experiment:

  • Meet with all of your research volunteers to go over the procedure.
  • Book the experiment rooms on the Google Calendar.
  • Reserve a laptop or laptops on the Google Calendar.
  • Recruit participants through SONA or flyers.
  • Prepare the study for M-Turk or Prolific.
  • Use our lab email for recruitment.

After you have run through your experiment several times, documented all the steps, and ensured that everything is working exactly as you intended, you are ready to begin. While you are running your experiment:

  • Document the study in Google Docs, Trello, and/or Slack (as appropriate).
  • Make a note of anything unusual or out of the ordinary for every participant in a behavioural study.
  • Collect signatures from participants if you are paying them.
  • Data should be stored in text files that can be opened with Excel or Google Sheets or imported directly into R. Be sure these are linked to the project sheet.
  • Make sure the raw data are labelled consistently and are never altered.
  • Be sure to follow the data storage procedures outlined in the ethics protocol.

Data Management

Your data plan should specify where and how to store your data. While you are collecting data you should be working on a script in R (or Python) to extract and summarize the raw data according to your plan. When you reach the planned sample size, ensure that all of that data are secure and backed up and do an initial summary with your R script.

As you work on summarizing and managing your data:

  • Make notes in the project sheet and/or Trello board about where the data are stored.
  • Document your steps in an R Notebook (or Python Notebook).
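As a minimal sketch of what such a summary script might do, here one subject’s raw file is read and condensed to per-block accuracy. The tab-delimited layout and column names (subject, block, correct) are hypothetical, not the lab’s actual format, and the raw data are never modified:

```python
import csv
import io
from statistics import mean

# Stand-in for one subject's raw text file; the columns (subject,
# block, correct) are hypothetical, not the lab's actual format.
raw_file = io.StringIO(
    "subject\tblock\tcorrect\n"
    "s01\t1\t0\n"
    "s01\t1\t1\n"
    "s01\t2\t1\n"
    "s01\t2\t1\n"
)

# Accumulate accuracy by block; the raw file itself is never rewritten.
by_block = {}
for row in csv.DictReader(raw_file, delimiter="\t"):
    by_block.setdefault(row["block"], []).append(int(row["correct"]))

summary = {block: mean(scores) for block, scores in by_block.items()}
print(summary)
```

In practice the same loop would iterate over every subject’s file in the data folder and write one tidy summary table.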

Plots & Stats

Remember the photo of Dr. Katie Bouman, then a postdoc, when she first saw the rendering of the first image of a black hole that her algorithms generated? That’s the best part of science: seeing your data visualized for the first time. When you have completed your experiment and taken care of data storage and basic processing, it’s time to have fun and see what you discovered. Your analysis plan is your guide: it describes how you want to analyze the data, what your dependent variables are, and how to conduct statistical tests on your data to test your hypotheses. But before you do any statistics, work on visualizing the data. Use your R Notebook to document everything and generate box plots, scatter plots, or violin plots to see the means, medians, and distributions of the data.

Because you are using R Notebooks to do the analysis, you can write detailed descriptions of how you created each plot, what it shows, and how we should interpret it. If you need to drop a subject’s data for any reason, exclude them from the data set in R; do not delete the data from the raw data file. Make a comment in the script noting which subject was dropped and why, so the exclusion is clear and transparent.
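The same exclude-don’t-delete pattern, sketched here in Python for illustration (the lab does this in R; the subject IDs, scores, and exclusion reason are invented):

```python
# Raw scores are read in but never rewritten; exclusions live in the script.
raw_scores = {"s01": 0.82, "s02": 0.79, "s03": 0.31, "s04": 0.88}

# s03 excluded: chance-level accuracy suggests the instructions were
# not followed. Documented here rather than deleted from the raw file.
EXCLUDED = {"s03"}

analysis_scores = {sid: acc for sid, acc in raw_scores.items()
                   if sid not in EXCLUDED}
print(sorted(analysis_scores))
```

The raw dictionary (and, in real use, the raw file) still contains s03, so anyone rerunning the script can see exactly who was dropped and why.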

You can also use R to conduct the tests we proposed in the analysis plan. This might be a straightforward ANOVA or t-test, an LME model, a regression, etc. Follow the plan you wrote, and if you deviate from it, justify and document that exploratory analysis.
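For illustration of what such a test is doing under the hood (the lab runs these in R per the analysis plan), the core of an independent-samples Welch t statistic is only a few lines; the group scores below are invented:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)     # sample variances (n - 1 denominator)
    se = sqrt(va / len(a) + vb / len(b))  # standard error of the difference
    return (mean(a) - mean(b)) / se

# Invented accuracy scores for two training conditions.
group_a = [0.81, 0.75, 0.90, 0.84, 0.78]
group_b = [0.62, 0.70, 0.58, 0.66, 0.71]

print(round(welch_t(group_a, group_b), 2))
```

In R this is simply `t.test(a, b)`, which also handles the degrees of freedom and p-value; the point of the sketch is that the statistic itself is nothing mysterious.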

If you are fitting a decision-boundary model to your data, make sure you have the code for the model (these will be on my GitHub), and do your modeling separately from the behavioural analysis. The GLM models are saved as R scripts, but you should copy or fork them into your R Notebooks for your analysis so you can document what you did. Make sure you develop a version for your experiment and leave the generic model unmodified.

If you are fitting a prototype or exemplar model, these have been coded in Python. Use Python 3 with a basic text editor or JupyterLab. JupyterLab might be better, as it can generate markdown and reproducible code in the same way as R Notebooks.
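As a toy illustration of the prototype idea, not the lab’s actual model code, a prototype classifier averages each category’s exemplars and assigns a new stimulus to the nearest prototype (all stimuli here are invented two-dimensional values):

```python
from math import dist  # Euclidean distance (Python 3.8+)

def prototype(exemplars):
    """Average a category's exemplars, dimension by dimension."""
    return tuple(sum(dim) / len(exemplars) for dim in zip(*exemplars))

# Two hypothetical categories of two-dimensional stimuli.
protos = {
    "A": prototype([(1.0, 1.2), (0.8, 1.0), (1.2, 0.8)]),
    "B": prototype([(3.0, 3.1), (2.8, 3.3), (3.2, 2.9)]),
}

def classify(stimulus):
    # Assign the stimulus to the category with the nearest prototype.
    return min(protos, key=lambda c: dist(stimulus, protos[c]))

print(classify((1.1, 0.9)))
```

An exemplar model differs in that it keeps every stored item and sums similarity to all of them rather than comparing against a single average.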

  • Follow your analysis plan.
  • Consult with me or your peers if you notice any unusual patterns with anything.
  • Make notes in the project sheet and/or Trello board about what analyses you’ve completed.
  • Document your steps in an R Notebook (or Python Notebook).
  • If you drop a participant for any reason, say so in the comments of your R script (or other notes). We want this information to be recorded and transparent.

Present and Explain Your Work

While you are working on your analysis, present the interim work often in lab meetings for the rest of the group, and we can discuss it when we meet individually. The reason to present and discuss often is to keep the ideas and work fresh in your mind by reviewing manageable pieces of it. If you try to do too much at once, you may miss something or forget to document a step. Go over your work, make sure it’s documented, then work on the new analyses, and repeat. You should be familiar enough with your data and your analysis to explain them to yourself, to me, to your peers, and eventually to anyone who reads your paper.

Use the following guidelines for developing your work:

  • Make your best plots and figures.
  • Present these to the lab on a regular basis.
  • Use RPubs to share summary work instantly.
  • Keep improving the analysis after each iteration.
  • You should always have 8-10 slides that you can present to the group.
  • Document your work in R Notebooks, Google Docs, Trello, and Google Slides.

Write Papers Around This Workflow

The final step is to write a paper that describes your research question, your experimental design, your analysis, and your interpretation of what the analysis means. A scientific paper, in my opinion, has two important features:

  1. The paper should be clear and complete. That means it describes exactly what you wanted to find out, how and why you designed your experiment, how you collected your data, how you analyzed your data, what you discovered, and what that means. Clear and complete also means that it can be used by you or by others to reproduce your experiments.
  2. The paper should be interesting. A scientific paper should be interesting to read. It needs to connect to a testable theory, a problem in the literature, or an unexplained observation. It is just as long as it needs to be.

I think the best way to generate a good paper is to make good figures. Try to tell the story of your theory, experiment, and results with figures. The paper is really just writing down how you made the figures. You might have a theory or model that a figure can explain. You can create clear figures for the experimental design, the task, and the stimuli. Your data figures, which you made according to your analysis plan, will frame the results section, and a lot of what you write is telling the reader what they show, how you made them, and what they mean. Writing a scientific paper is writing a narrative for your figures.

Good writing requires good thinking and good planning. But if you’ve been working on your experiment according to this plan, you’ve already done a lot of the thinking and planning work that you need to do to write things out. You’ve already made notes about the literature and prior work for your introduction. You have notes from your experimental design phase to frame the experiment. You have an ethics protocol for your methods section and an analysis plan for your results. You’ll need to write the discussion section after you understand the results, but if you’ve been presenting your 8-10 slides in lab meeting and talking about them you will have some good ideas and the writing should flow. Finally, if you’ve been keeping track of the papers in Paperpile, your reference section should be easy.

Submit the Paper

The final paper may have several experiments, each around the theme set out in the introduction. It’s a record of what we did, why we did it, and how. The peer reviewed journal article is the final stage, but before we submit the paper we have a few other steps to ensure that our work roughly conforms to the principles of Open Science, each of which should be straightforward if we’ve followed this plan.

  • Create a publication quality preprint using the lab template. We’ll host this on PsyArXiv (unless submitting a blind ms.)
  • Create a file for all the stimuli or materials that we used and upload to OSF.
  • Create a data archive with all the raw, de-identified data and upload to OSF.
  • Upload a clean version of the R Notebook that describes your analyses to OSF.

The final steps are organized around the requirements of each journal. Depending on where we decide to submit our paper, some of these may change. Some journals will insist on a Word .doc file; others will allow a PDF. In both cases, assume that the Google Doc is the canonical version, and the PDF or .doc files are just for the journal submission. Common steps include:

  • Download the Google Doc as a MS Word Doc or PDF.
  • Create a blind manuscript if required.
  • Embed the figures if possible; otherwise, place them at the end.
  • Write a cover letter that summarizes the paper and explains why we are submitting it.
  • Identify possible reviewers.
  • Write additional summaries as required and generate keywords.
  • Check and verify the names, affiliations, and contact information for all authors.
  • Submit and wait for 8-12 weeks!

Conclusion

As I mentioned at the outset, this might not work for every lab or every project. But the take-home message (document everything you do and share your work for feedback) should resonate with most science and scholarship. Is it necessary to have a formal guide? Maybe not, though I found it instructive for me as the PI to write this all down. Many of these practices were already in place, but not really formalized. Do you have a similar document or plan for your lab? I’d be happy to hear in the comments below.

How do you plan to use your PhD?

If you follow my blog or Medium account, you’ve probably already read some of my thoughts and musings on running a research lab, training graduate students, and being a mentor. I wrote about that just a few weeks ago. But if you haven’t read any of my previous essays, let me provide some context. I’m a professor of Psychology at a large research university in Canada, the University of Western Ontario. Although we’re seen as a top choice for undergraduates because of our excellent teaching and student life, we also train physicians, engineers, lawyers, and PhD students in dozens of fields. My research group fits within the larger area of Cognitive Neuroscience, which is one of our university’s strengths.

Within our large group (Psychology, the Brain and Mind Institute, BrainsCAN, and others) we have some of the very best graduate students and postdocs in the world, not to mention my excellent faculty colleagues. I’m not writing any of this to brag or boast, but rather to give the context that we’re a good place to be studying cognition, psychology, and neuroscience.

And I’m not sure any of our graduates will ever get jobs as university professors.

The Current State of Affairs

Gordon Pennycook, from Waterloo and soon the University of Regina, wrote an excellent blog post and paper on the job market for cognitive psychology professors in Canada. You might think this is too specialized, but he makes the case that we can probably extrapolate to other fields and countries and find the same thing. And since this is my field (and Gordon’s also), it’s easy to see how this affects students in my lab and in my program.

One thing he noted is that the average Canadian tenure-track hire now has 15 publications on their CV when hired. That’s a long CV, as long as what I submitted in my tenure dossier in 2008. It’s certainly longer than the CV I had when I was hired at Western in 2003: I was hired with 7 publications (two first-author) after three years as a postdoc and three years of academic job applications. And it’s certainly longer than what the most eminent cognitive psychologists had when they were hired. Michael Posner, whose work I cite to this day, was hired straight from Wisconsin with one paper. John Anderson, whose work I admire more than that of any other cognitive scientist, was hired at Yale with a PhD from Stanford and 5 papers on his CV. Nancy Kanwisher was hired in 1987 with 3 papers from her PhD at UCLA.

Compare that to a recent hire in my own group, who arrived with 17 publications in great journals after 5 years as a postdoc. Or compare that to most of our recent hires and short-listed applicants, who completed a second postdoc before they were hired. Even our postdoctoral applicants, people applying for 2-3 year postdocs at my institution, are already postdocs looking for a better postdoc to get more training and become more competitive.

So it’s really a different environment today.

The fact is, you will not get a job as a professor straight after finishing a PhD. Not in this field, and not in most fields. Why do I say this? For one, it’s not possible to publish 15-17 papers during your PhD. Not in my lab, at least. Even if I added every student to every paper I published, they would not have a CV with that many papers; I simply can’t publish that many papers and keep everything straight. And I can’t really put every student on every paper anyway. If the PhD is not adequate for getting a job as a professor, what does that mean for our students, our program, and for PhD programs in general?

Expectation mismatch

Most students enter a PhD program with the idea of becoming a professor. I know this because I used to be the director of our program, and that’s what nearly every student says, unless they are applying to our clinical program with the goal of being a clinician. If students are seeking a PhD to become a professor, but we can clearly see that the PhD is not sufficient, then students’ expectations are not being met by our program. We admit students to the PhD with most hoping to become university professors, and then they slowly learn that it’s not possible. Our PhD is, in this scenario, merely an entry into the ever-lengthening postdoc stream, which is where you actually prepare to be a professor. We don’t have well-thought-out alternatives for any other stream.

But we can start.

Here’s my proposal

  1. We have to level with students and applicants right away that “tenure-track university professor” is not going to be the end game of the PhD. Even the very best students will be looking at 1-2 postdocs before they are ready for that. For academic careers, the PhD is training for the postdoc in the same way that med school is training for residency and fellowship.
  2. We need to encourage students to begin thinking about non-academic careers in their first year. This means encouraging students’ ownership of their career planning. There are top-notch partnership programs like Mitacs and OCE (these are Canadian, but similar programs exist in the US, EU, and UK) that help students transition into corporate and industrial careers. We have university programs as well. And we can encourage students to look at certificate programs to ensure that their skills match the market. But students won’t always know about these things if their advisors don’t know or care.
  3. We need to emphasize and cultivate a supportive atmosphere. Be open and honest with students about these things and encourage them to be open as well. Students should be encouraged to explore non-academic careers and not made to feel guilty for “quitting academia”.

I’m trying to manage these things in my own lab. It is not always easy, because I was trained to all but expect that the PhD would lead into a job as a professor. That was not really true when I was a student, and it’s even less true now. But I have to adapt. Our students and trainees have to adapt, and it’s incumbent upon us to guide and advise.

I’d be interested in feedback on this topic.

  • Are you working on a PhD to become a professor?
  • Are you a professor wondering if you’d be able to actually get a job today?
  • Are you training students with an eye toward technical and industrial careers?

 

The Professor, the PI, and the Manager

Here’s a question that I often ask myself: How much should I be managing my lab?

I was meeting with one of my trainees the other day, and this grad student mentioned that they sometimes don’t know what to do during the work day and feel like they are wasting a lot of their time. As a result, this student will end up going home and maybe working on a coding class or (more often) doing non-grad-school things. We talked about what this student is doing, and I agreed: they are wasting a lot of time and not really working very effectively.

Before I go on, some background…

There is no shortage of direction in my lab, or at least I don’t think so. I think I have a lot of things in place. Here’s a sample:

  • I have a detailed lab manual that all my trainees have access to. I’ve sent this document to my lab members a few times, and it covers a whole range of topics about how I’d like my lab group to work.
  • We meet as a lab 2 times a week. One day is to present literature (journal club) and the other day is to discuss the current research in the lab. There are readings to prepare, discussions to lead, and I expect everyone to contribute.
  • I meet with each trainee, one-on-one, at least every other week, and we go through what each student is working on.
  • We have an active lab Slack team, every project has a channel.
  • We have a project management Google sheet with deadlines and tasks that everyone can edit, add things to, see what’s been done and what hasn’t been done.

So there is always stuff to do, but I also try not to micromanage my trainees. I generally assume that students will want to be learning and developing their scientific skill set. This student is someone who has been pretty set on looking for work outside of academics, and I’m a big champion of that; I’m a champion of helping any of my trainees find a good path. But despite all the project management and meetings, this student was feeling lost and never sure what to work on. And so they were feeling like grad school has nothing to offer in the realm of skill development for this career direction. Are my other trainees feeling the same way?

Too much or too little?

I was kind of surprised to hear one of my students say that they don’t know what to work on, because I have been working harder than ever to make sure my lab is well structured. We’ve even dedicated several lab meetings to the topic.

The student asked what I work on during the day, and it occurred to me that I don’t always discuss my daily routine. So we met for over an hour and I showed this student what I’d been working on for the past week: an R Notebook that will accompany a manuscript I’m writing, making all the analysis of an experiment open and transparent. We talked about how much time that’s been taking: how I spent 1-2 days optimizing the R code for a computational model, how this code will then need clear documentation, how the OSF page will also need folders for the data files, stimuli, and experimenter instructions, and how those need to be uploaded. I have been spending dozens of hours on this one small part of one component of one project within one of the several research areas in my lab, and there’s so much more to do.

Why aren’t my trainees doing the same? Why aren’t they seeing this, despite all the project management I’ve been doing?

I want to be clear: I am not trying to be critical of any of my trainees. I’m not singling anyone out. They are good students, and it’s literally my job to guide and advise them. So I’m left with the sense that they are feeling unguided, with the perception that there’s not much to do. If I’m supposed to be the guide and they are feeling unguided, this seems like a problem with my guidance.

What can I do to help motivate?

What can I do to help them organize, feel motivated, and productive?

I expect some independence for PhD students, but am I giving them too much? I wonder if my lab would be a better training experience if I were just a bit more of a manager.

  • Should I require students to be in the lab every day?
  • Should I expect daily summaries?
  • Should I require more daily evidence that they are making progress?
  • Am I sabotaging my efforts to cultivate independence by letting them be independent?
  • Would my students be better off if I assumed more of a top down, managerial role?

I don’t know the answers to these questions. But I know that there’s a problem. I don’t want to be a boss, expecting them to punch the clock, but I also don’t want them to float without purpose.

I’d appreciate input from other PIs. How much independence is too much? Do you find that your grad students are struggling to know what to do?

If you have something to say about this, let me know in the comments.

Dealing with Failure

When the hits come, they really come hard.

I’m dealing with some significant personal/professional failures this month.

I put in for two federal operating grants this past year: one from NSERC to fund my basic cognitive science work on learning and memory and one from SSHRC to fund some relatively new research on mindfulness meditation. I worked pretty hard on these last fall.

And today I found out that neither were funded.


Port Stanley Beach, ON 2018

This means that for the first time in a long number of years, my lab does not have an active federal research grant. The renewal application from NSERC is particularly hard to swallow, since I’ve held multiple NSERC grants and they have a pretty high funding rate relative to other programs. I feel like the rug was pulled out from under me and worry about how to support the graduate students in my lab. I can still carry on doing good research this coming year, and I have some residual funds, but I won’t lie: this is very disappointing.

The cruelest month, the cruelest profession.

It’s often said that academic/scientific work loads heavily on dealing with failure. It’s true. I’ve had failed grants before. Rejected manuscripts. Experiments that I thought were interesting or good that fell apart with additional scrutiny. For every success, there are multiple failures. And that’s all just part of being a successful academic. Beyond that, many academics may work 6-8 years to get a PhD, do a post doc, and find themselves being rejected from one job after another. Other academics struggle with being on the tenure track and may fail to achieve that milestone.

And April really truly is the cruelest month in academics. Students may have to deal with rejection from grad school, med school, graduate scholarships, job applications, internships, and residency programs. They worry about their final exams. Faculty worry about rejection from grants, looking for jobs, and a whole host of other things. (And at least here in Canada, we still have snow in the forecast…)

Why am I writing this?

Well, why not? I’m not going to hide these failures in shame. Or try to blame someone else. I have to look these failures in the eye, own them, take responsibility for them, and keep working. Part of that means taking the time to work through my emotions and feelings about this. That’s why I’m writing this.

I’m also writing, I guess, to say that it’s worth keeping in mind that we all deal with some kind of stress or anxiety or rejection. Even people who seem to have it together (like me, probably: recently promoted to Full Professor, a respectable research output, several teaching awards, a textbook, and a history of being a kind and decent teacher and mentor to hundreds of students) take these hits. But really, I’m doing fine. I’m still lucky. I’m still privileged. I know that others are hurting more than I am. I have no intention of wallowing in pity or raging. I’m not going to stop working, writing, doing research, or trying to improve as a teacher. Moving forward is the only way I can move.

Moving on

We all fail. The question is: What are you going to do about it?

From a personal standpoint, I’m not going to let this get me down. I’ve been in this boat before. I have several projects that are now beginning to bear fruit. I’ve had some terrific insights about new collaborative work. I have a supportive department, and I’m senior enough to weather quite a lot. (Though I’m not Job, so you don’t have to test me, Lord!)

From a professional standpoint, though, I think I know what the problems were and I don’t even need to see the grant reviews or committee comments (though I will be looking at them soon). There’s only one of me and branching off into a new direction three years ago to pursue some new ideas took time away from my core program, and I think both suffered a bit as a result. That happens, and I can learn from that experience.

I’ll have to meet with my research team and students next week and give them the bad news. We’re probably going to need to have some difficult conversations about working through this, and I know it will hit some of them hard too.

It might also mean some scholarly pruning. It might mean turning off a few ideas to focus more on the basic cognitive science that’s most important to me.

Congratulations to everyone who got good news this month. Successful grants, acceptance into med school, hired, or published. Success was earned. And for those of us getting bad news: accept it, deal with it, and progress.

Now enjoy the weekend everyone.


The Infinity: Email Management and Engagement

It’s a cold and rainy Sunday morning in November. I’m drinking some delicious dark coffee from Balzac’s.

My wife and I are each working on different things and taking advantage of the relative morning quiet. I’m at the kitchen table working off my laptop, listening to music on my headphones, and working on overview material: looking at the emails I have to respond to. I criticize myself for procrastinating, which is in itself an extra layer of procrastinating.

Email is the engine of misery

I take a look at my work email inbox. It’s not too bad for a professor. I keep it organized, and the inbox contains only those things that need a reply. But there are 59 messages in there that I need to reply to; four of these have been awaiting a reply since September. Even while I write this, I’m feeling a real sense of anxiety and conflict. On the one hand, I greatly desire to spend hours slogging through the entire list and dealing with the backlog. I’d love to look at INBOX = 0. I think that would make me feel great (which is a strange belief to have…I have never had INBOX = 0, so how do I know it would make me feel great?). Even an hour could make a good dent and dispense with at least 2/3 of the messages.

But at the same time, I want to ignore all of it. To delete all the email. I think about Donald Knuth’s quote about email. Knuth is a computer scientist at Stanford who developed, among other things, the “TeX” system of typesetting. He has an entry on his website about email indicating that he does not have an email address.

“Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things. What I do takes long hours of studying and uninterruptible concentration. I try to learn certain areas of computer science exhaustively; then I try to digest that knowledge into a form that is accessible to people who don’t have time for such study.”

This quote, and the idea behind it, is something I really aspire to. It’s one of my favourite quotes and a guiding principle…but I can’t make the leap. Like Knuth, I also write books and articles, and I try to get to the bottom of things. But it seems like I never get beneath the surface, because I’m always responding to email, sending email, Tweeting, and engaging on social media. Deeper analysis never happens because I’m preoccupied with the surface. I feel trapped by this.

And yet, I cannot ignore the surface level. Engagement with email is part of my job. Others depend on my responding. For example, I had a now-retired departmental colleague who just never responded to email, and this was very frustrating to deal with. I suspect (I know) that others picked up the slack when he failed to be responsive. I have a current colleague who is much the same. So I don’t endorse blowing off some aspects of one’s job, knowing that others will pick up the pieces. I don’t want to shirk my administrative and teaching responsibilities, even if it means I sacrifice the ability to have dedicated research and writing time.

Give and Take

In the end, I am trapped in a cage that I spend hours each day making stronger. Trapped in a pit that I work ever longer hours to make deeper. The incoming email will not stop, but one could probably slow it down by not sending any email out, by providing FAQs on my syllabus about when to email, by delegating email to TAs.

The real question is: if I give less time to email, will it take less of my time away? If so, will I use that time wisely? Or will I turn to another form of distraction? Is email the problem? Or am I the problem?

Grade Inflation at the University Level

I probably give out too many As. I am aware of this, so I may be part of the problem of grade inflation. Grade inflation has been a complaint probably as long as there have been grades and universities.

Harvard students receive mostly As.

But the issue has been in the news recently. For example, a recent story asserted that the most frequent (i.e., modal) grade at Harvard was an A. That seems a bit much. If Harvard is generally regarded as one of the world’s best universities, you would think it would be able to assess its students across a wider range. A great Harvard undergrad should be a rare thing, and should be much better than the average Harvard undergrad. Evidently, all Harvard undergrads are great.

One long-time faculty member says that “in recent years, he himself has taken to giving students two grades: one that shows up on their transcript and one he believes they actually deserve…. ‘I didn’t want my students to be punished by being the only ones to suffer for getting an accurate grade.’”

In this way, students know what their true grade is, but they also get a Harvard grade that will be an A, so that they look good and Harvard looks good. It’s not just Harvard, of course. This website, gradeinflation.com, lays out all the details. Grades are going up everywhere…but student performance may not be.

The University is business and As are what we make.

From my perspective as a university professor, I see the pressure from all sides, and I think the primary motivating force is the degree to which universities have embraced a consumer-driven model. An article in The Atlantic this week got me thinking about it even more. As the article points out, we (the university) benefit when more students are doing well and earning awards and scholarships. One way to make sure they can earn scholarships is to keep the grades high.

In other words, students with As bring in money. Students with Cs do not. But this suggests that real performance assessment and knowledge mastery is subservient to cash inflow. I’m probably not the only one who feels that suggestion is true.

And of course, students, realizing they are the consumer, sort of expect a good grade for what they pay for. They get the message we are sending: grades matter more than knowledge acquisition, and money matters more than knowledge. If they pay their tuition and fees on time, they kind of expect a good grade in return. They will occasionally cheat to obtain these grades. In this context, cheating is economically rational, albeit unethical.

Is there a better system?

I am not sure what to do about this. I’m pretty sure that my giving out more Cs is not the answer, unless all universities did the same. I wonder if we even need grades. Perhaps a better system would be a simple pass/fail? Or a three-way Fail/Pass/Exceed? This would indicate that students have mastered the objectives in the course, and we (the university) could confidently stand behind our degree programs and say that our graduates have acquired the requisite knowledge. Is that not our mission? Does it matter to an employer whether a student received an A or a B in French? Can they even use that as a metric when A is the modal grade? The employer needs to know that the student mastered the objectives for a French class and can speak French. Of course, this means it might be tricky for graduate and professional schools to determine admission. How will medical schools know whom to admit if they do not have a list of students with As? Though if most students are earning As, that point is moot anyway.

In the end, students, faculty, and university administrators are all partially responsible for the problem, and there is no clear solution. And lurking behind it, as is so often the case, is money.