Tag Archives: Opinion

Slow Social

When Elon Musk finalized his purchase of Twitter, I began to think about whether it would be worth staying on Twitter. I’m not alone; many users are wondering the same thing. Not all of it is Musk-related, but he’s often the catalyst.

And even in the best-case scenario, where truly harmful stuff stays off the site, Musk’s stated plan for Twitter is to make it “the most respected advertising platform in the world,” which sounds really boring, to be honest. And not something I’m especially interested in.

Do I need to be here?

I’ve been thinking about how I use social media these days. Not how social media is supposed to be used, not how others use it, not how Musk and Zuck want me to use it, but how I actually use it.

Generally speaking, I use social media as a platform to get tiny bursts of approval. I like some self-expression, and I also like getting some positive feedback on things that I write or share. That’s just about it. It sounds hollow and self-centred. It is self-centred.

So is that how I want to use it? Or is that how Musk and Zuck want me to use it? Do they want me here just engaged enough to stick around for the small pleasure of a like or a retweet? I think they do. And why should I give in to that?

Do I need to be here? Do I even need to be anywhere?

How it got that way

I used to use social media more actively. I used Facebook and Twitter to engage, argue, and browse. I used Facebook in the mid-2010s to connect with old friends but also to argue and comment on news articles—a “digital town square”—and I still checked to see if people liked my comments. It is gratifying to see that others agree with you. But it was also often horrifying to see family and friends doing the same thing with sometimes grotesque opinions, racism, and misogyny. Facebook ended up deepening rifts for many of us, because we now saw, in full view, what people said behind our backs, and we did not always like it. I deleted my FB account in 2018 and only brought it back in 2022, but now it’s just for keeping in touch with my family. No opinions. Bland but kind of nice. I’ll post photos of a graduation or my garden.

I moved to Twitter in 2012 because it was becoming the place to share science, ideas, and insights into academia. I did learn a lot about important things like open science and anti-racism. At first I really engaged with replies and quote tweets, enjoying spirited debate with interesting people. But my usage has stalled. Twitter’s algorithms are dominated by a few core topics and driven mostly by outrage. Even though I eventually blocked or muted some of the accounts and words, the site is still heavily dominated by narcissistic mega-personalities (including Trump and Kanye, who are no longer even on the platform).

Over the past few years, I’ve engaged less and less, even as my actual tweeting remained the same. I rarely argue or debate. I tweet about covid anxiety, workloads, lockdowns, and some local politics, but mostly it’s little things that make me happy.

Most of my usage now is sending out dispatches of things I like, find, enjoy, get irritated by, am worried about, or think are funny. Thoughts into the void, but still craving that small boost from a like. It’s one-sided. I share and get a like. That’s neither good nor bad, but it’s hardly a reason to be on Twitter versus anywhere else.

I think this is as good a time as any to rethink things, and perhaps to write longer-form pieces and share them on this WordPress site. I enjoy self-expression. I might still use Facebook or Twitter to share the blog, but I don’t need to be on those sites much beyond that. It’s certainly a slower social media.

I think “slow social” is what I need anyway.

Summer Running or Winter Running: Which is Better?

I love running outside, but each season is different. And where I live, in Southern Ontario, we get quite a range: summer highs up to the mid-30s C (mid-90s F) and winter lows of -25C or lower (-13F and lower). I run all year long, so I decided to compare the two seasons to decide which is best for running.

A Few Caveats (YMMV)

First, it should be self-evident that late September to early October is actually the best time for running. It’s the best time for a lot of things. The weather is beautiful. It’s not too hot, not too cold. The air is usually crisp. The days are getting shorter, but not too short. And maybe there’s some evolutionary need to get out and run, as if we need to get out and gather nuts and game meat for the long winter. Who knows? I’m not an evolutionary psychologist, so I’m just making that up.


This is why October is everyone’s favourite month

Second, I have to acknowledge that I have the ability and privilege to run all year. I’m able-bodied and reasonably fit for 49 years old. Not everyone has that. I am fortunate to live in a city with places to run. I am fortunate to live in a city that usually plows the sidewalks, even after two feet of snow, and even plows some of the running and multi-use trails. Not everyone has that. As well, as a white, middle-aged male, I can run alone without worrying about being hassled, harassed, or feeling like a suspect. Not everyone has that privilege. And I run with my wife sometimes too: it’s great to have a partner.

So let’s get to it. Which is the best season for running: Summer or Winter?

Summer

Summer is like a long weekend. June is your Friday afternoon, full of promise and excitement. July is a Saturday: it’s fun, long, and full. Yes, there are summer chores to be done, and in the back of your mind you know the end is coming, but hey, it’s summer. August is Sunday. Enjoy your brunch, but soon it’s back to school, back to reality.

Weather: It’s warm and pleasant some days, but miserable on others. A sunny day at +25C is wonderful, but a humid day with a heat index of +44C is not fun to run in. You need to get out early or late to find cooler temps in those long, hot July weeks. If you wait too long, it’s too hot.

Gear: Shorts, a light shirt, a quick-dry hat, water, and sunscreen. That’s it. You need the hat or something to keep sweat from pouring down your face. You also need to carry water, because you’ll be sweating.

Flora: Summer is full of life and greenery here in the Great Lakes region. There are flowers and beautiful leafy shade trees. The scent of blossoms is in the air. But there’s pollen in the air too, and that can make it hard to breathe. Some days in June, I sneeze every few minutes.


Summer trail runs can be sublime

Wildlife: Good and bad. You can see deer in the woods, along with birds, rabbits, foxes, and coyotes. That’s the good. But you will be bothered by mosquitoes and flies. And if you run on trails, there are spiders and ticks. Many of my long trail runs include running through webs and brushing off spiders. Not fun. I also do a tick check.

Air: It smells great early on, as jasmine-scented summer breezes envelop you on an early morning run. But it’s also muggy, hard to breathe, and ozone-y. Around here, the air can smell of pig manure (we live near agriculture) and skunks. Lots of skunks.

Risk of weather death: Low, but people do die every year of heat exhaustion, and heat stroke is a real possibility.

Distractions: Mixed. On the one hand, as a university professor I have more flexibility in the summer because I am not lecturing. But there’s also more outside stuff to do: lawn work, garden work, coaching softball, the beach, biking places. I feel less compelled to run on a day when I’ve had to mow the lawn and take care of other summer chores.

Overall: Summer running is great in late May and early June, but it soon turns tedious, and to be honest, by July it begins to feel like a chore. The hot weather can really drain the will to move.


Hot and humid by the Springbank Bridge in London, ON.

Winter

Winters seem very long here, even in the southern part of Canada. The days are short; the nights are long. January can seem especially brutal because the holidays are over and winter is just beginning.

Weather: Extremely variable, more so than summer. You might get a stretch of “mild” days where it’s -10C, followed by two weeks of -25C with brutal wind. You can run in that, but the toughest part is just getting out the door. Late winter is warmer, but that presents another problem. The sidewalk or trail will thaw during the day and freeze as soon as the sun goes down. A morning run or an evening run means dealing with a lot of ice.


It’s cold and dark but so beautiful

Gear: Tights, windpants, hat, gloves, and layers, layers, layers. A balaclava and sunglasses might be needed. That means more laundry. Carrying water is not quite as crucial as in the summer, but you may still need to, because public rest areas will not have their water fountains turned on. The water can freeze, which is not good (and has happened to me). Ice cleats or “Yaktrax” can help if you’re running on a lot of packed snow and ice.

Flora: There will be evergreens and that’s pretty much it. No pollen but no shade either. And nothing to block the wind.

Wildlife: Mostly good, but there’s less of it. You’ll see cardinals and squirrels and even deer. No bugs or spiders or skunks. But here in London, ON, the geese start to get very aggressive as they get closer to mating in the spring… Avoid!

Air: Crisp and clear. But at -25C and below, it can take your breath away. You warm up quickly, and it really feels great to breathe the cold air.

Risk of weather death: Pretty low, but black ice is treacherous. You can slip and fall and really hurt yourself. Also, be aware that windchill is a real thing. A windchill of -45C is dangerous.

Distractions: Mixed. I’m busier at work, but I’m not outside as much, and so I feel more compelled to run.

Overall: Winter has many challenges, but they’re offset by an elusive quality: the feeling that getting outside will be an adventure.

Conclusions

The Winner: Winter running is better.

There are pros and cons to each season. But I find it easier and more enjoyable to run in the dead of winter than in the hazy days of summer.


I look happy, even after a long cold run.

One reason winter is best is how the weather changes over the course of a run in each season. Unless I go out really early or really late, a morning run in the summer means that the weather gets objectively worse as I run. Try to do an 18km run at 8:00am, and by 9:30 it’s really getting hot! You feel exhausted. Winter is the reverse. It gets nicer and slightly warmer as I go, so I feel exhilarated.

Another reason that winter is better is just a survival feeling. Winter feels like an adventure. I have to suit up and carry more gear, and I might be the only one out on a trail. Summer, on the other hand, feels like a chore. Like something I have to do. I have to get the run in before it gets too hot.

My stats bear this preference out.

In January I average 40-50 km/week. In July it’s between 25 and 30 km/week. My long runs are longer in the winter. I think it’s because I’m just not outside as much in the winter, so the long runs keep me sane. In the summer, I’m mowing, walking, coaching, and just doing more stuff. There’s less need to run.

So that’s it. Winter running is better than summer running. But this is just my opinion. What are your thoughts? Do you agree? Do you like running when it’s hot out? Do you hate being bundled up for winter runs?

In the end it does not matter too much as long as you’re able to get outside and enjoy a run, a walk, or whatever.



Why don’t more academics engage in public debate?

Last week, Matthew Sears, a professor of classics at the University of New Brunswick, wrote a great article in Maclean’s about how academics should participate more often in public scholarship and debate. For example, if you’re a historian and you think Steven Pinker gets the Enlightenment wrong, speak up and challenge him. If you’re a developmental psychologist and you think Jean Twenge gets things wrong about kids and digital devices, speak up. And if you’re a humanities scholar or biological psychologist and you think Jordan Peterson gets archetypes, myth, or lobsters wrong, speak up and challenge! In particular, if some public scholar is writing “out of their lane” and getting things wrong in your lane, you owe it to your field to set the record straight. (Sears didn’t say it that way; that’s my interpretation.)

Sears’s article was a hit. I agreed with his thesis. And there was some great, lively discussion on Twitter, of course. Some academics pointed out that we do engage in public debate and discourse… on Twitter. But the question remains: why don’t more academics seek out opportunities to engage in public debate? In my opinion, we do, up to a point. And there are a few reasons why we don’t go further. Many of these were mentioned in response to Sears’s article.

Personal Risks

One clear hurdle is that scholars who speak out, especially against very popular public figures with large online followings, risk online harassment. This can be time-consuming at best and life-threatening at worst.

In some cases, it may be worth the risk, but in many others it may not be worth risking online harassment to challenge a public figure. For the risk to be worthwhile, the public figure would need to be making very dangerous or damaging claims, and thankfully that rarely happens among public intellectuals.

Lack of Professional Support

Another hurdle that many scholars face when seeking out public debate and outreach is a lack of professional support. For example, some commenters on Sears’s article pointed out that junior scholars and people from racial minority groups, Indigenous groups, and LGBTQ communities face a greater risk of public backlash than people in safer circumstances.

Another academic pointed out that universities do encourage some public engagement, but there is little institutional incentive: our job performance is usually tied to teaching and research, not public debate. Unlike public figures, whose primary job is to speak publicly, academics are hired to teach and do research.

These are important challenges, but clearly they don’t apply to everyone. Tenured professors can (and should) speak out and participate in public debate when appropriate. So why don’t more academics look to be publicly engaged?

A Tradeoff

I mentioned that it’s not easy, even if we want to. It has to do with the tradeoff between public work and university responsibilities. A full-time academic might not have much time left for public debate (and vice versa: a public scholar does not have as much time left for academic work).

Some of the most outspoken public intellectuals are not, or are no longer, traditional “40/40/20” academics. This formula refers to the nominal workload for professors at many large research universities: we’re expected to devote 40% of our job to teaching, another 40% to our research and scholarship, and 20% to service like committee work and editorial duties. If I were to jump into a public debate with a well-known public intellectual, it would take time away from that regular work. Maybe that’s worth it from time to time, but for many of us, public engagement happens on our own time, as a personal project.

The decision to become (or debate with) a well-known public intellectual means a tradeoff with one’s academic work. For that reason, most of us engage with the public in ways that hew closely to our own discipline.

Steven Pinker as an Example

Steven Pinker is a full professor at Harvard, with an incredibly long bibliography of books, chapters, articles, and journal papers. He has a full CV posted, so you can see what he’s up to. Mostly, he writes books. Many of them have been bestsellers. I thought his “How the Mind Works” was a fantastic book, an inspirational account of the importance of cognitive science. He appears at lectures and on talk shows.

Steven Pinker, the Johnstone Family Professor of Psychology (Rose Lincoln / Harvard University)


But he does not seem to teach very much, and it’s impossible to know if he does any departmental service work. It’s Harvard. I imagine that there’s less tedious admin work for full professors at Harvard than for full professors at Western (my institution). And Pinker occupies an endowed, named position (the Johnstone Family Professor of Psychology). He would not be expected to teach or do administrative work. It would not be a rational use of his time. My point is that he’s not rank and file. Pinker, agree or disagree with his work, is an elite public intellectual by any definition. And he’s been great in this role as a public academic cognitive scientist.

But as he strayed from cognitive science and linguistics, people in other fields began to complain about his work. He’s too optimistic in “The Better Angels of Our Nature,” some have said. He misunderstands the Enlightenment in “Enlightenment Now,” others have complained. These are still important books, but they are outside his primary field. Scholars, even ones who stray into the public forum, like to stay in their lanes and don’t like it when scholars from another field encroach.

Jordan Peterson is a Special Case

Looming over this, of course, is Jordan Peterson. Peterson is not in my field, but we’re in similar cohorts: middle-aged white males, tenured full professors of Psychology at large Canadian research universities (Peterson is at the University of Toronto; I’m at the University of Western Ontario). Prior to becoming “The Jordan Peterson,” his research impact at U of T was very good but not incredible. (Note: you can save yourself the trouble of pointing out that my h-index is lower than Peterson’s. Like every academic, I know my score. It’s moderate. I’m cool with that.) He was known to be an excellent lecturer. By all accounts he’s always been hard working.


Jordan Peterson sitting at the exact same angle as Steven Pinker (cred. G. Skidmore)

But that’s not why he’s famous.

He’s mostly famous now for opposing Canadian Bill C-16, for his YouTube videos, for being on Joe Rogan, for touring with Sam Harris, for his “12 Rules” book, and for being the subject of hundreds of think pieces. While he may have been a good teacher, scholar, and departmental citizen at U of T, that’s just not what he does anymore. There’s been a tradeoff. Unlike Pinker, whose fame is primarily within his field as a cognitive psychologist, much of Peterson’s fame is broader and touches on other disciplines. He’s really popular.

His work as a public intellectual is no longer closely connected to his work at U of T as a personality psychologist, or his work as a clinical psychologist, or his work as a teacher. He’s no longer a 40/40/20 academic. What’s more, although he’s still affiliated with the University of Toronto, he’s been on leave. He may not return. And really, why would he? Agree or disagree with his ideas and the cult of personality that has developed around him, he can reach more people as a public speaker than as a tenured professor. And that’s what many of us, as academics, desire: we want to reach people, to teach, to inspire. Far from being “de-platformed,” he’s been re-platformed. He’s exchanged the lecture hall for the O2 Arena.

It would be difficult for most academics to compete with those resources and to challenge someone like Jordan Peterson. Some academics have done so in print, though the linked article was written by Ira Wells, who teaches literature and cultural criticism at the University of Toronto. The humanities and cultural criticism are his field. But most of us don’t regard Peterson as an academic or a researcher; he’s someone in a different category altogether. I offer this not as a criticism but as an observation.

I wouldn’t really care much about Peterson’s work normally, because (unlike Pinker’s) it’s not in my field. I did not follow his work before he became famous. I do care that some of his videos and writings have been used (by others) to marginalize trans people, including people I know and respect. I can and will stand up for those people, but unless Peterson is going to mischaracterize prototype theory, any criticism by me would be personal and not scholarly. In that case, I’m not speaking as an expert. I might as well be criticizing Dr. Oz or Alan Alda (I cannot imagine I’d ever criticize Alan Alda, BTW; he’s one of my heroes). It’s possible, it’s my prerogative, but I’m not really doing it as an academic. I’m just doing it. And so I don’t.

I’d be out of my element, and it would end up costing me time. A protracted debate with a public intellectual who is a full-time speaker and public figure would eventually affect my teaching, my scholarship, and my research. Unless it’s in my own field, it’s difficult to justify.

Most of us do public work within our field

There are lots of successful public intellectuals who work within their fields. Sara Goldrick-Rab, for example, is a world leader on the cost of education. Susan Dynarski is well known for her economics work, including work on the use of laptops in the classroom. My colleague Adrian Owen, who was recently awarded the OBE for his work on consciousness and the vegetative state, has written a terrific popular book on the topic. Daniel Levitin writes on cognition and music. The list is long.

The criticism seems to start when academics fail to “stay in their lane.” The public did not object to Jordan Peterson’s work on personality and creativity, or Noam Chomsky’s work on linguistics, or even Stephen Hawking’s work on black holes. When these people wrote and worked on other topics, their limitations began to show.

In the end, I think most of us as academics are happy and enthusiastic to engage in public debate; we just tend to do it in our own fields. We tend to self-promote and educate, and not to debate topics we’re not experts in. As for me? I think I do my best work in the classroom. I like the outreach I can do in that formal setting. I’m working on bringing that to my next book, but don’t expect me to be too far outside my element.

Not yet…we’ll see.


The Language of Sexual Violence


Women’s March leaders address a rally against the confirmation of Supreme Court nominee Judge Brett Kavanaugh in front of the court building on September 24. (Chip Somodevilla/Getty Images)

The language we use to describe something can provide insights into how we think about it. For example, we all reserve words for close family members (“Mama” or “Papa”) that have special meaning, and these words are often constrained by culture. And as elements of culture, linguistic conventions can sometimes tell us something very deep about how our society thinks about events.

Current Events

This week (late September 2018) has been a traumatic and dramatic one. A Supreme Court nominee, Brett Kavanaugh, was accused of an attempted rape 35 years ago. Both he and his accuser, Christine Blasey Ford, were interviewed at a Senate hearing. And much has been written and observed about the ways they spoke and communicated during this hearing. At the same time, many women took to social media to describe their own experiences with sexual violence. I have neither academic expertise nor personal experience with sexual violence. But like many, I’ve followed these events with shock and heartbreak.

Survivors

I’ve noticed something this week about how women who have been victims of sexual violence talk about themselves and about the persons who carried out the assault. First of all, many women identify as survivors and not victims. A victim is someone who had something happen to them. A survivor is someone who has been able to overcome (or is working to overcome) those bad things. I don’t know if this is always a conscious decision, though it could be. I think many women use the term intentionally to show that they have survived something.

Part of The Self

But there is another linguistic construction that is even more interesting. I’ve noticed, especially in the news and on social media, that women say or write “my rapist,” “my abuser,” or “my assailant.” I don’t believe this is intentional or affected. I think it is part of the language because it’s part of how the person thinks about the event, or maybe part of how society thinks about the event. The language suggests that women have internalized the identity of the perpetrator, and that the event and the abuser have become part of who they are. It’s deep and consequential in ways that few other events are.

Of course a sexual assault would be expected to be traumatic and even life-changing, but I’m struck by how this is expressed in the idioms and linguistic conventions women use to describe the event. Their language suggests personal ownership. It’s more than a memory for an event or an episode. It’s a memory for a person, a traumatic personal event, and also knowledge of the self. Autonoetic memory is deeply ingrained. It is “indelible in the hippocampus.”

All of us talk this way sometimes, of course. If you say “this cat,” it’s different from saying “my cat.” The former is an abstraction or general conceptual knowledge. The latter is your pet; it’s part of your identity. “My mother,” “my car,” and “my smartphone” are more personal but still somewhat general. But “my heart,” “my child,” “my body,” and “my breath” are deeply personal: these things are just part of who we are.

Women don’t use this construction when talking about non-sexual violence. They might say “the person who cut me off” or “the guy who robbed me.” Similarly, men who have been assaulted don’t use this language. They say “the man who assaulted me,” “the guy who punched me,” or even “the priest who abused me.” And men do not use this language to refer to people they have assaulted (e.g., “my victim”). You might occasionally hear or read men refer to “my enemy” or “my rival,” which, I think, carries the same deeper, more profound meaning as the terms women use for sexual violence, though without the trauma. So, by and large, this seems to be something that women say about sexual violence specifically.

Deep and Personal Memory

So when a woman says “my rapist,” it suggests a deep and personal knowledge. Knowledge that has stayed and will stay with her, affecting her life and how she thinks about the event and herself. Eyewitness memory is unreliable. Memory for facts and events—even personal ones—is malleable. But you don’t forget who someone is. You don’t forget the sound of your sibling’s voice. You don’t forget the sight of your children. You don’t forget your address. You don’t forget your enemy… and you would not forget your abuser or your rapist.

The Cognitive Science Age


Complex patterns in the Namib desert resemble neural networks.

The history of science and technology is often delineated by paradigm shifts. A paradigm shift is a fundamental change in how we view the world and our relationship with it. The big paradigm shifts are sometimes even referred to as an “age” or a “revolution.” The Space Age is a perfect example. The middle of the 20th century saw not only an incredible increase in public awareness of space and space travel, but also many of the industrial and technical advances that we now take for granted, which were byproducts of the Space Age.

The Cognitive Science Age

It’s probably cliché to write this, but I believe we are at the beginning of a new age and a new, profound paradigm shift. I think we’re well into the Cognitive Science Age. I’m not sure anyone calls it that, but I think that is what truly defines the current era. And I also think that an understanding of Cognitive Science is essential for understanding our relationships with the world and with each other.

I say this because in the 21st century, artificial intelligence, machine learning, and deep learning are being fully realized. Every day, computers are solving problems, making decisions, and making accurate predictions about the future… about our future. Algorithms shape our behaviours in more ways than we realize. We look forward to autonomous vehicles that will depend on the simultaneous operation of many computers and algorithms. Machines will become (and have already become) central to almost everything.

And this is a product of Cognitive Science. For us cognitive scientists, this new age is our idea, our modern Prometheus.

Cognitive Science 

Cognitive Science is an interdisciplinary field that first emerged in the 1950s and 1960s and sought to study cognition, or information processing, as its own area of study rather than as a strictly human psychological concept. As a new field, it drew from Cognitive Psychology, Philosophy, Linguistics, Economics, Computer Science, Neuroscience, and Anthropology. Although people still tend to work and train in those more established traditional fields, it seems to me that society as a whole is indebted to the interdisciplinary nature of Cognitive Science. And although it is a very diverse field, the most important aspect in my view is the connection between biology, computation, and behaviour.

The Influence of Biology

A dominant force in modern life is the algorithm: a computational engine that processes information and makes predictions. Learning algorithms take in information, learn to make associations, make predictions from those associations, and then adapt and change. This is referred to as machine learning, but the key here is that these machines learn the way biological systems do.

For example, the learning rule (Hebbian learning) that inspired machine learning was discovered by the psychologist and neuroscientist Donald Hebb at McGill University. Hebb’s 1949 book, The Organization of Behaviour, is one of the most important books written in this field and explained how neurons learn associations. This concept was refined mathematically by the Cognitive Scientists Marvin Minsky, David Rumelhart, James McClelland, Geoff Hinton, and many others. The advances we see now in machine learning and deep learning are a result of Cognitive Scientists learning how to adapt and build computer algorithms to match algorithms already seen in neurobiology. This is a critical point: it’s not just that computers can learn, but that the learning and adaptability of these systems is grounded in an understanding of neuroscience. That’s the advantage of an interdisciplinary approach.
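To make the Hebbian idea concrete, here’s a minimal sketch in Python (a toy illustration of my own, not anyone’s production model): connections between units that are repeatedly active together get stronger.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """One step of the classic Hebbian rule: delta_w = lr * post * pre.
    Units that fire together strengthen their connection."""
    return weights + lr * np.outer(post, pre)

# Two input units feeding two output units; all weights start at zero.
w = np.zeros((2, 2))

# Repeatedly present a pattern in which input 0 and output 0 are co-active.
for _ in range(20):
    pre = np.array([1.0, 0.0])   # presynaptic (input) activity
    post = np.array([1.0, 0.0])  # postsynaptic (output) activity
    w = hebbian_update(w, pre, post)

print(w)  # only the weight linking the co-active pair has grown
```

This “fire together, wire together” update is the conceptual seed that Rumelhart, McClelland, Hinton, and others later refined into the learning rules behind modern neural networks.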

The Influence of Behaviour 

As another example, the theoretical grounding for the AI revolution was developed by Allen Newell (a computer scientist) and Herbert Simon (an economist). Their work from the 1950s to the 1970s on understanding human decision making and problem solving, and on how to model them mathematically, provided a computational approach that was grounded in an understanding of human behaviour. Again, this is an advantage of the interdisciplinary approach afforded by Cognitive Science.

The Influence of Algorithms on our Society 

Perhaps one of the most salient and immediate ways to see the influence of Cognitive Science is in the algorithms that drive the many products we use online. Google is many things, but at its heart it is a search algorithm and a way to organize the knowledge in the world so that the information a user needs can be found. The basic ideas of knowledge representation that underlie Google’s categorization of knowledge were explored early on by Cognitive Scientists like Eleanor Rosch and John Anderson in the 1970s and 1980s.

Or consider Facebook. The company runs and designs a sophisticated algorithm that learns about what you value and makes suggestions about what you want to see more of. Or, maybe more accurately, it makes suggestions for what the algorithm predicts will help you to expand your Facebook network… predictions for what will make you use Facebook more. 

In both of these cases, Google and Facebook, the algorithms learn to connect the information they acquire from the user (from you) with the existing knowledge in the system, in order to make predictions that are useful and adaptive for users, so that users will provide more information to the system, so that it can refine its algorithm and acquire still more information, and so on. As the network grows, it becomes more adaptive, more effective, and more knowledgeable. This is what your brain does, too. It causes you to engage in behaviour that seeks information to refine its ability to predict and adapt.

These networks and algorithms are societal minds; they serve the same role for society that our own network of neurons serves for our body. Indeed, these algorithms can even change society. This is something that some people fear.

Are Fears of the Future Well Founded?

When tech CEOs and politicians worry about the dangers of AI, I think that idea is at the core of their worry: the algorithms to which we entrust more and more of our decision making are altering our behaviour to serve the algorithm, in the same way that our brain alters our behaviour to serve our own mind and body. That strikes many as unsettling and unstoppable. I think these fears are well founded and unavoidable, but like any new age or paradigm shift, we should continue to approach and understand this from scientific and humanist directions.

The Legacy of Cognitive Science

The breakthroughs of the 20th and 21st centuries arose from exploring learning algorithms in biology, instantiating those algorithms in increasingly powerful computers, and relating both of these to behaviour. The technological improvements in computing and neuroscience have enabled these ideas to become a dominant force in the modern world. Fear of a future dominated by non-human algorithms and intelligence may be unavoidable at times, but an understanding of Cognitive Science is crucial to being able to survive and adapt.


Open Science: My List of Best Practices


This has nothing to do with Open Science. I just piled these rocks up at Lake Huron.

Are you interested in Open Science? Are you already implementing Open Science practices in your lab? Are you skeptical of Open Science? I have been all of the above, and some recent debates on #sciencetwitter have covered the pros and cons of Open Science practices. I decided to write this article to share my experiences as I’ve been pushing my own research in the Open Science direction.

Why Open Science?

Scientists have a responsibility to communicate their work to their peers and to the public. This has always been part of the scientific method, but the methods of communication have differed over the years and differ by field. This essay reflects my opinions on Open Science (capitalized to reflect it as a set of principles), and I also give an overview of my lab’s current practices. I’ve written about this in my lab manual (which is also open), but until I sat down to write this essay, I had not really codified how my lab and research have adopted Open Science practices. This should not be taken as a recipe for your own science or lab, and these ideas may not apply to other fields. This is just my experience trying to adopt Open Science practices in my Cognitive Psychology lab.

Caveats First

Let’s get a few things out of the way…

First, I am not an expert in open science. In fact, until about 2-3 years ago, it never even occurred to me to create a reproducible archive for my data, or to ensure that I could provide analysis scripts to someone else so that they could reproduce my analysis, or to provide copies of all of the items and stimuli I used in a psychology experiment. I’ve received requests for data before, but I usually handled those in a piecemeal, ad hoc fashion. If someone asked, I would put together a spreadsheet.

Second, my experience is only generalizable to comparable fields. I work in cognitive psychology and have collected behavioural data, survey questionnaire data, and electrophysiological data. I realize that data sharing can be complicated by ethics concerns for people who collect sensitive personal or health data. I realize that other fields collect complex biological data that may not lend itself well to immediate sharing.

Finally, the principles and best practices that I’m outlining here were adopted in 2018. Some of this was developed over the course of the last few years, but this is how we are running our lab now and how I plan to run my research lab for the foreseeable future. That means there are still gaps: studies that were published a few years ago that have not yet been archived, papers that may not have a preprint, analyses that were done 20 years ago in SAS on the VAX 11/780 at the University at Buffalo. And if anyone wants to see data from my well-cited 1998 paper on prototype and exemplar theory, I can get it, but it is not going to be easy.

Core Principles

There are many aspects to Open Science, but I am going to outline three areas that cover most of them. There will be some overlap, and some aspects may be missed.

Materials and Methods

The first aspect of Open Science concerns openness with respect to methods, materials, and reproducibility. In order to satisfy this criterion, a study or experiment should be designed and written up in such a way that another scientist or lab in the same field would be able to carry out the same kind of study if they wanted to. That means that any equipment that was used is described in enough detail or is readily available. It also means that the computer programs used to carry out the study are accessible and their code is freely available. As well, in psychology, there are often visual, verbal, or auditory stimuli that participants make decisions about, or questions that they answer. These should also be available.

Data and Analysis

The second aspect of Open Science concerns the open availability of the data collected in the study. In psychology, data take many forms, but usually they are participants’ responses on surveys, responses to visual stimuli, EEG recordings, or data collected in an fMRI study. In other fields, data may consist of observations taken at a field station, measurements of an object or substance, or trajectories of objects in space. Anything that is measured, collected, or analyzed for a publication should be available to other scientists in the field.

Of course, in a research study or scientific project, the data that have been collected are also processed and analyzed. Here, several decisions need to be made. It may not always be practical to share raw data, especially if things were recorded by hand in a notebook or if the digital files are so large as to be unmanageable. On the other hand, it may not be useful to publish data that have been processed and summarized too much. For most fields, there is probably a middle ground where the data have been cleaned and minimally processed, but no statistical analyses have been done and the data have not been transformed. The path from raw data to this minimal state should be clear and transparent. In my experience so far, this is one of the most difficult decisions to make. I don’t have a solid answer yet.

In most scientific fields, data are analyzed using software and field-specific statistical techniques. Here again, several decisions need to be made while the research is being done in order to ensure that the end result is open and usable. For example, if you analyze your data with Microsoft Excel, what might be simple and straightforward to you might be uninterpretable to someone else. This is especially true if there are pivot tables, unique calculations entered into various cells, and transformations that have not been recorded. This, unfortunately, describes a large part of the data analysis I did as a graduate student in the 1990s. And I’m sure I’m not alone. Similarly, any platform that is proprietary will present limits to openness. This includes Matlab, SPSS, SAS, and other popular computational and analytic software. I think that’s why you see so many people who are moving towards Open Science practices encouraging the use of R and Python, because they are free, openly available, and they lend themselves well to scientific analysis.

Publication

The third aspect of Open Science concerns the availability of the published data and interpretations: the publication itself. This is especially important for any research that is carried out at a university or research facility that is supported by public research grants. Most of these funding agencies require that you make your research accessible.

There are several good open-access research journals that make publications freely available to anyone because the author helps cover the cost of publication. But many traditional journals are still behind a paywall and are only available to paid subscribers. You may not see the effects of this if you’re working at a university, because your institution may have a subscription to the journal. The best solution is to create a free and shareable version of your manuscript, a preprint, that is available on the web, that anyone can access, and that does not violate the copyright of the publisher.

Putting this in practice

I tried to put some guidelines in place in my lab to address these three aspects of open science. I started with one overriding principle: when I submit a manuscript for publication in a peer-reviewed journal, I should also ensure that at the time of submission I have a complete data file that I can share, analysis scripts that I can share, and a preprint.

I have implemented as much of this as possible with every paper that we’ve submitted for publication since late 2017 and with all our ongoing projects. We don’t submit a manuscript until we can meet the following:

  • We create a preprint of the manuscript that can be shared via a public online repository. We post this preprint to the online repository at the same time that we submit the manuscript to the journal.
  • We create shareable data files for all of the data collected in the study described in that manuscript. These are almost always unprocessed or minimally processed data in a Microsoft Excel spreadsheet or a text file. We don’t use Excel for any summary calculations, so the data are just data.
  • As we’re carrying out the data analysis, we document our analyses in R notebooks. We share the R scripts and notebooks for all of the statistical analyses and data visualizations in the manuscript. These are open and accessible and should match exactly what appears in the manuscript. In some cases, we have posted R notebooks with additional data visualizations beyond what is in the manuscript as a way to add value.
  • We also create a shareable document for any nonproprietary assessments or questionnaires that were designed for this study and copies of any visual or auditory stimuli used in the study.

Now, on this list of best practices, it would be disingenuous to suggest that every single paper from my lab meets all of those criteria. For example, one recently published study made use of Matlab instead of Python, because that’s how we knew how to analyze the data. But we’re using these principles as a guide as our work progresses. I view Open Science and these guidelines as an important and integral part of training my students, just as important as the theoretical contributions that we’re making to the field.

Additional Resources and Suggestions

In order to achieve this goal, the following guidelines and resources have been helpful to me.

OSF

My public OSF profile lists current and recent projects. OSF stands for “Open Science Framework,” and it’s one of many data repositories that can be used to share data, preprints, unformatted manuscripts, analysis code, and other things. I like OSF, and it’s kind of incredible to me that this wonderful resource is free for scientists to use. But if you work at a university or public research institute, your library probably runs a public repository as well.

Preregistration

For some studies, preregistration may be a helpful additional step in carrying out the research. There are limits to preregistration, many of which are addressed with Registered Reports. At this point, we haven’t done any Registered Reports. Preregistration is helpful, though, because it encourages the researcher or student to lay out a list of the analyses they plan to do, to describe how the data are going to be collected, and to make that plan publicly available before the data are collected. This doesn’t mean that preregistered studies are necessarily better, but it’s one more tool to encourage openness in science.

Python and R

If you’re interested in open science, it really is worth looking closely at R and Python for data manipulation, visualization, and analysis. In psychology, for example, SPSS has been a long-standing and popular way to analyze data. SPSS does have a syntax mode that allows the researcher to share their analysis protocol, but that mode of interacting with the program is much less common than the GUI version. Furthermore, SPSS is proprietary: if you don’t have a license, you can’t easily look at how the analyses were done. The same is true of data manipulation in Matlab. My university has a license, but if I want to share my data analysis with a private company, they may not have one. But anyone in the world can install and use R and Python.
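To show what I mean by an open, license-free analysis, here’s a minimal sketch in Python (the file name, columns, and exclusion rule are hypothetical, purely for illustration). Every processing step lives in the script rather than in hidden spreadsheet cells, and anyone can re-run it with free tools:

```python
# Minimal sketch of a shareable analysis script (hypothetical data file and columns).
import pandas as pd
from scipy import stats

# Load the minimally processed data exactly as posted in the public repository.
data = pd.read_csv("study1_trials.csv")  # hypothetical file name

# The exclusion rule is stated in code, where every reader can see it.
clean = data[data["rt_ms"].between(200, 3000)]

# Descriptive statistics per condition.
print(clean.groupby("condition")["rt_ms"].agg(["mean", "std", "count"]))

# An inferential test of the condition difference, reproducible by anyone.
a = clean.loc[clean["condition"] == "A", "rt_ms"]
b = clean.loc[clean["condition"] == "B", "rt_ms"]
print(stats.ttest_ind(a, b))
```

A script like this, posted alongside the data, is what makes the path from raw data to reported result transparent.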

Conclusion

Science isn’t a matter of belief. Science works when people trust the methodology, the data and interpretation, and by extension, the results. In my view, Open Science is one of the best ways to encourage scientific trust and to encourage knowledge organization and synthesis.

Cognitive Bias and the Gun Debate


Image from Getty

I teach a course at my Canadian university on the Psychology of Thinking, and in this course we discuss topics like concept formation, decision making, and reasoning. Many of these topics lend themselves naturally to the discussion of current events, and in one class last year, after a recent mass shooting in the US, I posed the following question:

“How many of you think that the US is a dangerous place to visit?”

About 80% of the students raised their hands. This surprised me because although I live and work in Canada and I’m a Canadian citizen, I grew up in the US; my family still lives there, and I still think it’s a reasonably safe place to visit. Most students justified their answer by referring to school shootings, gun violence, and problems with American police. Importantly, none of these students had ever actually encountered violence in the US. They were thinking about it because it had been in the news. They were making a judgment about the likelihood of violence on the basis of the available evidence.

Cognitive Bias

The example above illustrates a cognitive bias known as the Availability Heuristic. The idea, originally proposed in the early 1970s by Daniel Kahneman and Amos Tversky (Kahneman & Tversky, 1979; Tversky & Kahneman, 1974), is that people generally make judgments and decisions on the basis of the most relevant memories they retrieve, the memories that are available at the time the judgment is made. In other words, when you make a judgment about a likelihood of occurrence, you search your memory and decide on the basis of what you remember. Most of the time, this heuristic produces useful and correct judgments. But in other cases, the available evidence may not correspond to the actual evidence in the world. For example, we typically overestimate the likelihood of shark attacks, airline accidents, lottery wins, and gun violence.

Another cognitive bias (also from Kahneman and Tversky) is known as the Representativeness Heuristic. This is the general tendency to treat individuals as representative of their entire category. For example, if I formed a concept of American gun owners as violent (based on what I’ve read or seen in the news), I might infer that each individual American gun owner is violent. I’d be making a generalization, a stereotype, and this can lead to bias in how I treat people. As with availability, the representativeness heuristic arises out of the natural human tendency to generalize. Most of the time, this heuristic produces useful and correct judgments. But in other cases, the representative evidence may not correspond to individual cases in the world.

The Gun Debate in the US

I’ve been thinking about this a great deal as the US engages in its ongoing debate about gun violence and gun control. It’s been widely reported that the US has the highest rate of private gun ownership in the world, and it also has an extraordinary rate of gun violence relative to other countries. These are facts. Of course, we all know that “correlation does not equal causation,” but many strong correlations do derive from a causal link. The most reasonable thing to do would be to begin implementing legislation that restricts access to firearms, but this never happens, even though many people are passionate about the need to restrict guns.

So why do we continue to argue about this? One problem that I rarely see discussed is that many of us have limited experience with guns and violence, so we have to rely on what we know from memory and from external sources, and we’re susceptible to cognitive biases.

Let’s look at things from the perspective of an average American gun owner. This might be you, people you know, your family, etc. Most of these gun owners are very responsible, knowledgeable, and careful. They own firearms for sport and also for personal protection, and in some cases they even run successful training courses for people to learn about gun safety. From the perspective of a responsible and passionate gun owner, it seems quite true that the problem is not guns per se but the bad people who use them to kill others. After all, if you are safe with your guns, and all your friends and family are safe, law-abiding gun owners too, then those examples will be the most available evidence for you to use in a decision. And so you base your judgments about gun violence on this available evidence and decide that gun owners are safe. As a consequence, gun violence is not a problem of guns and their owners but must be a problem of criminals with bad intentions. Forming this generalization is an example of the availability heuristic. It may not be entirely wrong, but it is the result of a cognitive bias.

But many people (me included) are not gun owners. I do not own a gun, but I feel safe at home. As violent crime rates decrease, the likelihood of being the victim of a personal crime that a gun could prevent is very small. Most people will never find themselves in that situation. In addition, my personal freedoms are not infringed by gun regulation, and I recognize that illegal guns are a problem. If I generalize from my experience, I may have difficulty understanding why people would need a gun in the first place, whether for personal protection or for a vaguely defined “protection from tyranny.” From my perspective, it’s far more sensible to focus on reducing the number of guns. After all, I don’t have one, and I don’t believe I need one, so I generalize and assume that anyone who owns firearms might be suspect or irrationally fearful. Forming this generalization is also an example of the availability heuristic. It may not be entirely wrong, but it is the result of a cognitive bias.

In each case, we are relying on cognitive biases to infer things about others and about guns. These inferences may be stifling the debate.

How do we overcome this?

It’s not easy to overcome a bias, because these cognitive heuristics are deeply ingrained and indeed arise as a necessary function of how the mind operates. They are adaptive and useful. But occasionally we need to override a bias.

Here are some proposals, but each involves taking the perspective of someone on the other side of this debate.

  1. Those of us on the left of the debate (liberals, proponents of gun regulations) should try to recognize that nearly all gun enthusiasts are safe, law-abiding people who are responsible with their guns. Seen through their eyes, the problem lies with irresponsible gun owners. What’s more, the desire to place restrictions on their legally owned guns activates another cognitive bias known as the endowment effect, in which people place a high value on something they already possess; the prospect of losing it is aversive because it increases their feeling of uncertainty about the future.
  2. Those on the right (gun owners and enthusiasts) should consider the debate from the perspective of non-gun-owners and recognize that proposals to regulate firearms are not attempts to seize or ban guns, but rather attempts to address one aspect of the problem: the sheer number of guns in the US, any of which could potentially be used for illegal purposes. We’re not trying to ban guns, but rather to regulate them and encourage greater responsibility in their use.

I think these things are important to deal with. The US really does have a problem with gun violence. It’s disproportionately high. Solutions to this problem must recognize the reality of the large number of guns, the perspectives of non-gun-owners, and the perspectives of gun owners. We’re only going to get there by first recognizing these cognitive biases and then attempting to overcome them in ways that search for common ground. By recognizing this, and maybe stepping back just a bit, we can begin to have a more productive conversation.

As always: comments are welcome.

How do you plan to use your PhD?

If you follow my blog or Medium account, you’ve probably already read some of my thoughts and musings on running a research lab, training graduate students, and being a mentor. I think I wrote about that just a few weeks ago. But if you haven’t read any of my previous essays, let me provide some context. I’m a professor of Psychology at a large research university in Canada, the University of Western Ontario. Although we’re seen as a top choice for undergraduates because of our excellent teaching and student life, we also train physicians, engineers, lawyers, and PhD students in dozens of fields. My research group fits within the larger area of Cognitive Neuroscience, which is one of our university’s strengths.

Within our large group (Psychology, the Brain and Mind Institute, BrainsCAN, and other groups) we have some of the very best graduate students and postdocs in the world, not to mention some excellent faculty colleagues. I’m not writing any of this to brag or boast, but rather to give the context that we’re a good place to be studying cognition, psychology, and neuroscience.

And I’m not sure any of our graduates will ever get jobs as university professors.

The Current State of Affairs

Gordon Pennycook, from Waterloo and soon the University of Regina, wrote an excellent blog post and paper on the job market for cognitive psychology professors in Canada. You might think this is too specialized, but he makes the case that we can probably extrapolate to other fields and countries and find the same thing. And since this is my field (and Gordon’s also), it’s easy to see how this affects students in my lab and in my program.

One thing he noted is that the average Canadian tenure-track hire now has 15 publications on their CV when hired. That’s a long CV, as long as what I submitted in my tenure dossier in 2008. It’s certainly longer than the CV I had when I was hired at Western in 2003: I was hired with 7 publications (two first-author) after three years as a postdoc and three years of academic job applications. And it’s certainly longer than what the most eminent cognitive psychologists had when they were hired. Michael Posner, whose work I cite to this day, was hired straight from Wisconsin with one paper. John Anderson, whose work I admire more than that of any other cognitive scientist, was hired at Yale with a PhD from Stanford and 5 papers on his CV. Nancy Kanwisher was hired in 1987 with 3 papers from her PhD at UCLA.

Compare that to a recent hire in my own group, who arrived with 17 publications in great journals after 5 years as a postdoc. Or compare it to most of our recent hires and short-listed applicants, who completed a second postdoc before they were hired. Even our postdoctoral applicants, people applying for 2-3 year postdocs at my institution, are often already postdocs looking for a better postdoc to get more training and become more competitive.

So it’s really a different environment today.

The fact is, you will not get a job as a professor right after finishing a PhD. Not in this field, and not in most fields. Why do I say this? Well, for one, it’s not possible to publish 15-17 papers during your PhD career. Not in my lab, at least. Even if I added every student to every paper I published, they would not have CVs with that many papers; I simply can’t publish that many papers and keep everything straight. And I can’t really put every student on every paper anyway. If the PhD is not adequate for getting a job as a professor, what does that mean for our students, our program, and for PhD programs in general?

Expectation mismatch

Most students enter a PhD program with the idea of becoming a professor. I know this because I used to be the director of our program, and that’s what nearly every student says, unless they are applying to our clinical program with the goal of becoming a clinician. If students are seeking a PhD to become a professor, but we can clearly see that the PhD is not sufficient, then students’ expectations are not being met by our program. We admit students to the PhD, most of them hoping to become university professors, and then they slowly learn that it’s not possible. Our PhD is, in this scenario, merely an entry into the ever-lengthening postdoc stream, which is where you actually prepare to be a professor. We don’t have well-thought-out alternatives for any other stream.

But we can start.

Here’s my proposal

  1. We have to level with students and applicants right away that “tenure-track university professor” is not going to be the end game of the PhD. Even the very best students will be looking at 1-2 postdocs before they are ready for that. For academic careers, the PhD is training for the postdoc in the same way that med school is training for residency and fellowship.
  2. We need to encourage students to begin thinking about non-academic careers in their first year. This means encouraging students’ ownership of their career planning. There are top-notch partnership programs like Mitacs and OCE (these are Canadian, but programs like this exist in the US, EU, and UK) that help students transition into corporate and industrial careers. We have university programs as well. And we can encourage students to look at certificate programs to ensure that their skills match the market. But students won’t always know about these things if their advisors don’t know or care.
  3. We need to emphasize and cultivate a supportive atmosphere. Be open and honest with students about these things and encourage them to be open as well. Students should be encouraged to explore non-academic careers and not made to feel guilty for “quitting academia”.

I’m trying to manage these things in my own lab. It is not always easy, because I was trained to all but expect that the PhD would lead to a job as a professor. That was not really true when I was a student, and it’s even less true now. But I have to adapt. Our students and trainees have to adapt, and it’s incumbent upon us to guide and advise them.

I’d be interested in feedback on this topic.

  • Are you working on a PhD to become a professor?
  • Are you a professor wondering if you’d be able to actually get a job today?
  • Are you training students with an eye toward technical and industrial careers?


The Unintended Cruelty of America’s Immigration Policies

Image from https://goo.gl/HtfqLa

It is well documented that the Trump administration is pursuing a senselessly cruel policy of prosecuting migrants at the border, detaining families, and incarcerating them in large, improvised detention centres. This includes taking children away from their parents and siblings and housing them separately for extended periods.

Pointlessly Cruel

Jeff Sessions has said that this policy is “simply enforcing the law” and that it’s a deterrent. He lays any negative consequences on the migrant families themselves, asking why they would risk bringing their children on such a long and dangerous trek. Other members of the administration have noted that families who claim asylum at ports of entry are not being detained or split apart. This too is disingenuous: the Trump administration has narrowed the grounds for asylum, and as the border has become increasingly militarized, migrants and asylum-seekers are being forced away from busy ports of entry and often into dangerous crossings.

How did we get to this point? How did a nation that once prided itself on welcoming immigrants become a nation increasingly looking to punish individuals even as they seek asylum? Although some aspects of this cruel policy have long been present in America’s history, I think the particular fixation on migration from Mexico stems from an unintended starting point.

Unintended Consequences

A recent podcast by Malcolm Gladwell explored the causes and effects of the militarized US-Mexico border. I found it fascinating and recommend listening to it. To summarize: for most of the 20th century, into the 1960s and 1970s, migration between the United States and Mexico was primarily cyclical. Migrants from rural areas near the border in Mexico would move to the United States for work, stay for a few months, and move back to Mexico to be with their families. This was an economic relationship, and it worked because the cost of crossing the border was essentially zero. If you were apprehended, you’d be returned, but otherwise the arrangement allowed migrants to flow into and out of the United States.

In the early 1970s, however, the US-Mexico border began to be militarized. It happened almost by accident. An extremely skilled and dedicated retired Marine general took over the Immigration and Naturalization Service and began to tighten up the way border patrols operated. There was never any intent to cause suffering. On the contrary, the original intent seems to have been to harmonize border enforcement with existing law in a way that benefited everyone. But as the border became less porous, migrants began seeking out more dangerous crossings, often in the high desert where the risk of injury and death was greater. As the cost of crossing the border back and forth increased, migrants became less likely to engage in cyclical migration; instead they stayed in the United States and either sent money home to Mexico or brought their families here.

This has profound implications for the current state of affairs. As each successive administration cracks down on illegal immigration, tightens the border, and militarizes the border patrol, it increases the risks and costs associated with crossing back and forth. Migrants still want to come to America, people are still claiming asylum, but illegal immigrants in the United States are persecuted and stay in hiding. Every indication is that the worst possible thing that could be done would be the actual construction of a wall. In some ways, an analogy can be drawn to desire paths in public spaces: there is a natural flow to collective human behaviour, and when civic planning and architecture don’t match it, human behaviour wins out. People will continue to migrate, and this will continue to be a problem.

Gladwell doesn’t say this, but it seems to me that the most rational and humane solution is a porous border. With a porous border, illegal immigrants are turned back when apprehended, but in a straightforward way. People are not put into detention centres. Families are not charged with a misdemeanour offence and jailed prior to their hearings, necessitating the removal of their children. There is still border security, but the overall level of enforcement is lower. In addition, a policy like this could benefit from increased access to green cards, recognizing that many migrants wish to work in the United States for only a few months. Unfortunately, no one in the Southwest (or anywhere else in America) is going to win an election on the promise of “Let’s make our border more porous and engage in lax border security.” That will not sell. But the evidence gathered by the Mexican Migration Project and reviewed by Gladwell in his podcast suggests this would still be the most rational solution.

More Objective Research

This is one of those cases where we need more objective policy research and less political rhetoric. Has anyone asked an algorithm or computer model to determine the ideal level of border security? How much flow is tolerable? How does one balance the economic costs of a relatively free flow of migrants against the costs of apprehension, detention, deportation, and any associated criminal proceedings? The latter are expensive and human-resource intensive. Do the risks of a porous border justify these expenses?

The thing is, these are computational problems. These are problems that demand rigorous computational analysis, not moralistic grandstanding about breaking the law or fears of drugs and criminals pouring over the border.
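
To make that concrete, here is a minimal sketch of the kind of model I have in mind, written in R. To be clear, this is purely illustrative: the model structure and every parameter below are invented assumptions for the sake of the example, not estimates from Gladwell’s podcast or from the Mexican Migration Project.

    # A toy cost model of border enforcement. All numbers are invented
    # assumptions for illustration, not real estimates.
    simulate_year <- function(enforcement, n_migrants = 100000) {
      # Assume higher enforcement makes each crossing costlier and riskier,
      # so fewer migrants return home (cyclical migration breaks down)
      p_return <- 1 / (1 + 9 * enforcement)   # 1.0 when enforcement is zero
      returned <- rbinom(1, n_migrants, p_return)
      staying  <- n_migrants - returned
      # Assume apprehension, detention, and proceedings cost more per
      # person as enforcement rises (hypothetical dollar units)
      enforcement_cost <- staying * 5000 * enforcement
      data.frame(enforcement, p_return, staying, enforcement_cost)
    }

    set.seed(1)
    results <- do.call(rbind, lapply(seq(0, 1, by = 0.25), simulate_year))
    print(results)

Even a toy model like this makes the trade-off explicit: as the assumed enforcement level rises, cyclical return collapses and the population that stays, along with the cost of processing them, grows.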

The evidence seems to suggest that for decades the relatively porous border had no ill effects on American society and was mutually beneficial to the US and to Mexican border regions. Though unintended, the slow militarization of the US-Mexico border restricted cyclical migration and made crossing more dangerous, which raised the real costs of illegal immigration, which in turn seemed to demand a stronger, more militaristic response. It’s a feedback loop: the harsher the enforcement, the worse the problem gets.

The current administration has adopted the harshest enforcement yet: a policy that, in my view, is intentionally cruel, a clear moral failing, and one that may be destined to fail anyway.

The Professor, the PI, and the Manager

Here’s a question that I often ask myself: How much should I be managing my lab?

I was meeting with one of my trainees the other day, and this grad student mentioned that they sometimes don’t know what to do during the work day and feel like they are wasting a lot of their time. As a result, this student will end up going home and maybe working on a coding class, or (more often) doing non-grad-school things. We talked about what this student is doing, and I agreed: they are wasting a lot of time and not really working very effectively.

Before I go on, some background…

There is no shortage of direction in my lab, or at least I don’t think so. I think I have a lot of things in place. Here’s a sample:

  • I have a detailed lab manual that all my trainees have access to. I’ve sent this document to my lab members a few times, and it covers a whole range of topics about how I’d like my lab group to work.
  • We meet as a lab 2 times a week. One day is to present literature (journal club) and the other day is to discuss the current research in the lab. There are readings to prepare, discussions to lead, and I expect everyone to contribute.
  • I meet with each trainee, one-on-one, at least every other week, and we go through what each student is working on.
  • We have an active lab Slack team, and every project has a channel.
  • We have a project management Google sheet with deadlines and tasks, which everyone can edit, add things to, and use to see what’s been done and what hasn’t.

So there is always stuff to do, but I also try not to micromanage my trainees. I generally assume that students will want to be learning and developing their scientific skill set. This particular student has been pretty set on looking for work outside of academia, and I’m a big champion of that; I’m a champion of helping any of my trainees find a good path. But despite all the project management and meetings, this student was feeling lost, never sure what to work on, and so they were feeling like grad school has nothing to offer in the realm of skill development for this career direction. Are my other trainees feeling the same way?

Too much or too little?

I was kind of surprised to hear one of my students say that they don’t know what to work on, because I have been working harder than ever to make sure my lab is well structured. We’ve even dedicated several lab meetings to the topic.

The student asked what I work on during the day, and it occurred to me that I don’t always discuss my daily routine. So we met for over an hour, and I showed this student what I’d been working on for the past week: an R notebook that will accompany a manuscript I’m writing, so that all the analyses for an experiment are open and transparent. We talked about how much time that’s been taking: how I spent 1-2 days optimizing the R code for a computational model, how this code will then need clear documentation, how the OSF page will also need folders for the data files, stimuli, and experimenter instructions, and how those need to be uploaded. I have been spending dozens of hours on this one small part of one component of one project within one of the several research areas in my lab, and there’s so much more to do.
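
For anyone who hasn’t put one of these together, here is a minimal sketch of the kind of script I’m describing. The file names, variables, and exclusion cutoffs below are hypothetical placeholders, not details from my actual project.

    # Sketch of a reproducible analysis script to accompany a manuscript.
    # Everything here (file names, columns, cutoffs) is a hypothetical
    # placeholder.
    library(readr)
    library(dplyr)

    # Read the raw data exactly as it will be shared on OSF, so anyone
    # can rerun the analysis from the posted files
    raw <- read_csv("data/experiment1_raw.csv")

    # Document every exclusion in code rather than in prose
    cleaned <- raw %>%
      filter(rt > 200, rt < 5000) %>%        # hypothetical RT cutoffs
      mutate(condition = factor(condition))

    # Condition means that will appear in the manuscript
    summary_stats <- cleaned %>%
      group_by(condition) %>%
      summarise(mean_rt = mean(rt), sd_rt = sd(rt), n = n())

    print(summary_stats)

The point of being this explicit is that a reader, or a trainee, can trace every number in the manuscript back to a line of code and a file posted on the OSF page.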

Why aren’t my trainees doing the same? Why aren’t they seeing this, despite all the project management I’ve been doing?

I want to be clear: I am not trying to be critical of any of my trainees, and I’m not singling anyone out. They are good students, and it’s literally my job to guide and advise them. So I’m left with the sense that they are feeling unguided, with the perception that there’s not much to do. If I’m supposed to be the guide and they are feeling unguided, this seems like a problem with my guidance.

What can I do to help motivate?

What can I do to help them get organized, feel motivated, and be productive?

I expect some independence from PhD students, but am I giving them too much? I wonder if my lab would be a better training experience if I were just a bit more of a manager.

  • Should I require students to be in the lab every day?
  • Should I expect daily summaries?
  • Should I require more daily evidence that they are making progress?
  • Am I sabotaging my efforts to cultivate independence by letting them be independent?
  • Would my students be better off if I assumed more of a top down, managerial role?

I don’t know the answers to these questions. But I know that there’s a problem. I don’t want to be a boss, expecting them to punch the clock, but I also don’t want them to float without purpose.

I’d appreciate input from other PIs. How much independence is too much? Do you find that your grad students are struggling to know what to do?

If you have something to say about this, let me know in the comments.