“Correlation does not imply causation” is a well-worn phrase, an expression used to explain things, and (often) to smugly shut down an argument. There’s a meta effect in which the phrase has some causal power: it can cause an argument to be discarded.
Of course, though, correlation often does imply causation. Cause and effect are most definitely correlated. Pearson even designed his correlation coefficient as an index of the strength of causation. It’s just that correlation is not enough to allow a valid causal inference to be made.
Causality and the Cat
Sometimes, even direct causal links are not enough to let us infer what caused what. My cat has an annoying habit that illustrates this.
Every morning at around 5:00am, without fail, she carries out a complicated routine in my bedroom. She starts by meowing loudly and then begins picking at the door to the closet. This reverberates loudly, enough to wake me. She will rattle the window blinds and sometimes open and slam the door to the room, and then start picking at that door. These are all loud enough, and unrelenting enough to cause me to get up, at 5:30, and head downstairs where I feed her. And today, she might have even caused me to write this essay.
These events, picking at the door and me getting up, are highly correlated. But are they causally linked? I like to imagine the cat thinks they are. That somewhere in the recesses of her dusty little cat-mind, she believes that she and she alone caused me to get up and feed her. This, I believe anthropomorphically, causes a sense of independent agency in the cat. She has purpose. She has power. Certainly, there is strong behavioural association and that’s why she continues to engage in the behaviour.
What is the candidate for causality?
Did the cat actually cause me to wake up, though? That’s not clear. There are many mediators that are outside her control that are also candidates for the cause. My own desire to wake up early is a cause. The motivation is already there and in fact I’d probably wake up around the same time anyway (though without the irritation that may have been caused by her routine). Another candidate is my desire to stop her from making noise that would wake others in the family. And I have to use the washroom. And I want to make coffee.
So the question I asked this morning was: Did she cause me to wake up or did she simply contribute to a larger causal model?
Or, did I actually cause her behaviours by getting up and reinforcing her actions, thus establishing and contributing to the symphony of slamming, picking, and pestering that seems like the cause but is actually the effect of my early rising habit? There really is no simple answer.
We are all creatures of habit
Causal reasoning, thinking about cause and effect and attempting to determine the cause, is not easy to do, yet we tend to do it anyway, almost without trying. Even in a fairly mundane scenario like the cat waking me up (or me waking up while the cat carries on), it’s easy to think about what is causing things, but it’s hard to establish causality.
Like my cat, we too are creatures of habit. We have a habit of looking for causality in the world. These tendencies are reinforced by the correlations we observe. And because of that, we can’t help believing in causes that are correlated and easy to observe.
The language we use to describe something can provide insights into how we think about it. For example, we all reserve words for close family members (“Mama” or “Papa”) that have special meaning, and these words are often constrained by culture. And as elements of culture, there are times when linguistic conventions can tell us something very deep about how our society thinks about events.
This week (late September 2018) has been a traumatic and dramatic one. A Supreme Court nominee, Brett Kavanaugh, was accused of an attempted rape 35 years ago. Both he and the accuser, Christine Blasey Ford, were interviewed at a Senate hearing. And much has been written and observed about the ways they spoke and communicated during this hearing. At the same time, many women took to social media to describe their own experiences with sexual violence. I have neither academic expertise nor personal experience with sexual violence. But like many, I’ve followed these events with shock and with heartbreak.
I’ve noticed something this week about how women who have been victims of sexual violence talk about themselves and the persons who carried out the assault. First of all, many women identify as survivors and not victims. A victim is someone who had something happen to them. A survivor is someone who has been able to overcome (or is working to overcome) those bad things. I don’t know whether this is always a conscious decision, but it is an effective way for a woman who has been a victim to show that she is a survivor.
Part of The Self
But there is another linguistic construction that is even more interesting. I’ve noticed, especially in the news and on social media, that women say or write “my rapist” or “my abuser”, or “my assailant”. I don’t believe this is intentional or affected. I think this is part of the language because it’s part of how the person thinks about the event. Or maybe part of how society thinks about the event. The language suggests that women have internalized the identity of the perpetrator, and that the event and the abuser have also become part of who they are as women. It’s deep and consequential in ways that few other events are.
Of course a sexual assault would be expected to be traumatic and even life changing, but I’m struck by how this is expressed in the idioms and linguistic conventions women use to describe the event. Their language suggests some personal ownership. It’s more than a memory for an event or an episode. It’s a memory for a person, a traumatic personal event, and also knowledge of the self. Autonoetic memory is deeply ingrained. It is “indelible in the hippocampus”.
All of us talk this way sometimes, of course. If you say “this cat” it’s different from saying “my cat”. The former is an abstraction or general conceptual knowledge. The latter is your pet. It’s part of your identity. “My mother”, “my car”, “my smartphone” are more personal but still somewhat general. But “my heart”, “my child”, “my body”, and “my breath” are deeply personal, and these things are just part of who we are.
Women don’t use this construction when talking about non-sexual violence. They might say “the person who cut me off” or “the guy who robbed me”. Similarly, men who have been assaulted don’t use this language. They say “the man who assaulted me”, or “the guy who punched me”, or even “the priest who abused me”. And men do not use this language to refer to people they have assaulted (e.g. “my victim”). You might occasionally hear or read men refer to “my enemy” or “my rival”, which, I think, has the same deeper, more profound meaning as the terms used by women for sexual violence, though without the same trauma. So by and large this seems to be something that women say about sexual violence specifically.
Deep and Personal Memory
So when a woman says “my rapist”, it suggests a deep and personal knowledge. Knowledge that has stayed and will stay with them, affect their lives, and affect how they think about the event and themselves. Eyewitness memory is unreliable. Memory for facts and events—even personal ones—is malleable. But you don’t forget who someone is. You don’t forget the sound of your sibling’s voice. You don’t forget the sight of your children. You don’t forget your address. You don’t forget your enemy…and you would not forget your abuser or your rapist.
The history of science and technology is often delineated by paradigm shifts. A paradigm shift is a fundamental change in how we view the world and our relationship with it. The big paradigm shifts are sometimes even referred to as an “age” or a “revolution”. The Space Age is a perfect example. The middle of the 20th Century saw not only an incredible increase in public awareness of space and space travel, but many of the industrial and technical advances that we now take for granted were byproducts of the Space Age.
The Cognitive Science Age
It’s probably cliche to write this but I believe we are at the beginning of a new age, and a new, profound paradigm shift. I think we’re well into the Cognitive Science Age. I’m not sure anyone calls it that, but I think that is what truly defines the current era. And I also think that an understanding of Cognitive Science is essential for understanding our relationships with the world and with each other.
I say this because in the 21st century, artificial intelligence, machine learning, and deep learning are now being fully realized. Every day, computers are solving problems, making decisions, and making accurate predictions about the future…about our future. Algorithms shape our behaviours in more ways than we realize. We look forward to autonomous vehicles that will depend on the simultaneous operation of many computers and algorithms. Machines will become (and have already become) central to almost everything.
And this is a product of Cognitive Science. As cognitive scientists, this new age is our idea, our modern Prometheus.
Cognitive Science is an interdisciplinary field that first emerged in the 1950s and 1960s and sought to study cognition, or information processing, as its own area of study rather than as a strictly human psychological concept. As a new field, it drew from Cognitive Psychology, Philosophy, Linguistics, Economics, Computer Science, Neuroscience, and Anthropology. Although people still tend to work and train in those more established traditional fields, it seems to me that society as a whole is in debt to the interdisciplinary nature of Cognitive Science. And although it is a very diverse field, the most important aspect in my view is the connection between biology, computation, and behaviour.
The Influence of Biology
A dominant force in modern life is the algorithm, a computational engine that processes information and makes predictions. Learning algorithms take in information, learn to make associations, make predictions from those associations, and then adapt and change. This is referred to as machine learning, but the key here is that machines learn biologically.
For example, the algorithm (Hebbian learning) that inspired machine learning was discovered by the psychologist and neuroscientist Donald Hebb at McGill University. Hebb’s 1949 book The Organization of Behavior is one of the most important books written in this field and explained how neurons learn associations. This concept was refined mathematically by the Cognitive Scientists Marvin Minsky, David Rumelhart, James McClelland, Geoff Hinton, and many others. The advances we see now in machine learning and deep learning are a result of Cognitive Scientists learning how to adapt and build computer algorithms to match algorithms already seen in neurobiology. This is a critical point: it’s not just that computers can learn, but that the learning and adaptability of these systems is grounded in an understanding of neuroscience. That’s the advantage of an interdisciplinary approach.
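To make the idea concrete, here is a minimal sketch of a Hebbian weight update in R. This is an illustration of the principle rather than anyone’s actual model; the toy setup, the learning rate eta, and the firing patterns are all my own assumptions.

```r
# A toy Hebbian learner: "cells that fire together, wire together."
# All names and numbers (eta, n_inputs, the firing patterns) are illustrative
# assumptions, not a historical reconstruction of Hebb's work.
set.seed(1)
n_inputs <- 4
eta      <- 0.05                  # learning rate
w        <- rep(0, n_inputs)      # synaptic weights, all starting at zero

for (trial in 1:200) {
  x <- rbinom(n_inputs, 1, 0.5)   # presynaptic firing (0 or 1) on this trial
  y <- x[1]                       # the postsynaptic unit fires with input 1
  w <- w + eta * y * x            # Hebb's rule: strengthen co-active connections
}

round(w, 2)
# w[1] grows on every trial where input 1 fires; the other weights grow only
# when they happen to co-fire with it, so the reliable association dominates.
```

The point of the toy example is the update line inside the loop: association strength accrues wherever pre- and postsynaptic activity coincide, which is the biological intuition that later connectionist models formalized.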
The Influence of Behaviour
As another example, the theoretical grounding for the AI revolution was developed by Allen Newell (a computer scientist) and Herbert Simon (an economist). Their work from the 1950s through the 1970s on understanding human decision making and problem solving, and how to model it mathematically, provided a computational approach that was grounded in an understanding of human behaviour. Again, this is an advantage of the interdisciplinary approach afforded by Cognitive Science.
The Influence of Algorithms on our Society
Perhaps one of the most salient and immediately present ways to see the influence of Cognitive Science is in the algorithms that drive the many products that we use online. Google is many things, but at its heart, it is a search algorithm and a way to organize the knowledge in the world so that the information that a user needs can be found. The basic ideas of knowledge representation that underlie Google’s categorization of knowledge were explored early on by Cognitive Scientists like Eleanor Rosch and John Anderson in the 1970s and 1980s.
Or consider Facebook. The company runs and designs a sophisticated algorithm that learns about what you value and makes suggestions about what you want to see more of. Or, maybe more accurately, it makes suggestions for what the algorithm predicts will help you to expand your Facebook network… predictions for what will make you use Facebook more.
In both of these cases, Google and Facebook, the algorithms are learning to connect the information that they acquire from the user, from you, with the existing knowledge in the system to make predictions that are useful and adaptive for the users, so that the users will provide more information to the system, so that it can refine its algorithm and acquire more information, and so on. As the network grows, it seeks to become more adaptive, more effective, and more knowledgeable. This is what your brain does, too. It causes you to engage in behaviour that seeks information to refine its ability to predict and adapt.
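As a toy illustration of that loop, consider a recommender that updates its estimate of a user’s interests from engagement. This is a sketch in R under invented assumptions (the taste vector, learning rate, and click model are all made up), not a description of how Google’s or Facebook’s systems actually work.

```r
# A toy recommendation feedback loop: recommend, observe engagement, update.
# Every number here is invented for illustration.
set.seed(7)
n_topics   <- 5
true_taste <- c(0.9, 0.6, 0.3, 0.1, 0.1)  # the user's actual interests (hidden from the system)
estimate   <- rep(0.5, n_topics)          # the system's current belief about those interests
alpha      <- 0.1                         # how quickly beliefs update

for (i in 1:500) {
  shown   <- sample(n_topics, 1, prob = estimate)  # show what it predicts you will like
  clicked <- rbinom(1, 1, true_taste[shown])       # the user engages, or doesn't
  estimate[shown] <- estimate[shown] +
    alpha * (clicked - estimate[shown])            # nudge the belief toward the evidence
}

round(estimate, 2)  # the estimates drift toward the user's actual tastes
```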
These networks and algorithms are societal minds; they serve the same role for society that our own network of neurons serves for our body. Indeed, these algorithms can even change society. This is something that some people fear.
Are Fears of the Future Well Founded?
When tech CEOs and politicians worry about the dangers of AI, I think that idea is at the core of their worry. The idea that the algorithms to which we entrust more and more of our decision making are altering our behaviour to serve the algorithm, in the same way that our brain alters our behaviour to serve our own minds and bodies, is something that strikes many as unsettling and unstoppable. I think these fears are well founded and unavoidable, but like any new age or paradigm shift, we should continue to approach and understand this from scientific and humanist directions.
The Legacy of Cognitive Science
The breakthroughs of the 20th and 21st centuries arose as a result of exploring learning algorithms in biology, the instantiation of those algorithms in increasingly more powerful computers, and the relationship of both of these concepts to behaviour. The technological improvements in computing and neuroscience have enabled these ideas to become a dominant force in the modern world. Fear of a future dominated by non-human algorithms and intelligence may be unavoidable at times, but an understanding of Cognitive Science is crucial to being able to survive and adapt.
August is one of two times during the year (the other being the stretch of time between Christmas and the new year) that I try to back away from email, social media, and work reading and try to catch up on reading for pleasure.
Reading for pleasure! Is there any greater luxury than being able to read a few books on various topics, just because you have the time? I don’t think there is.
August is ideal for this. It begins with a long weekend in Canada (civic day) and often we spend a week at a vacation rental in the Bruce Peninsula at some point in the month, and the reality of Fall term has yet to hit. Our kids’ sports are over. There’s time to relax a bit.
Past August Reads
Over the years, I’ve tackled many things and found some books and authors that I have greatly enjoyed. So here are some recommendations from past years. One year, about a decade ago, I grabbed a copy of David Copperfield by Charles Dickens that was at a rental cottage and was immediately taken with how entertaining it was and the strength of the characters. I couldn’t finish it in a week, though, and I did not want to take the book from the cottage, so I bought the book from Amazon when I got back. I’m so glad I finished it, and that led to me quickly reading Great Expectations, Hard Times, and Bleak House. At some point, I hope to finish the rest of Dickens’s books.
I also got caught up in the excellent police procedurals by Peter Robinson (the Inspector Banks series), again because I came across one at a rental home and was hooked. I’ve since worked my way through about 7 of these. Very well written crime books.
Other recommendations from summer reads: The Caine Mutiny by Herman Wouk, probably the best fictional depiction of a collapse in leadership I’ve ever read. Bonus: there’s the fantastic movie adaptation, and it turns out that my grandfather was a US Navy seaman on the same kind of ship (a destroyer) in the same South Pacific typhoon in WWII that played a heavy role in the book’s latter half, which is a cool personal connection.
I read The Orenda by Joseph Boyden on the shores of Georgian Bay, not far from where the action in the novel would have been set (this was before the controversy over Boyden’s identity had reached the mainstream). Last year I read Winter World by the incredible Bernd Heinrich. He is one of the very best science writers working today; I’ve read a few of his books, and this was my favourite of the bunch.
There have been some forgettable books as well, nothing bad, but things that felt like just passing the time (some seafaring adventure books by Clive Cussler, a biography of John Adams, Pinker’s How the Mind Works).
On the Shelf This Month
So what am I reading now?
I’m halfway through the epic fantasy The Name of the Wind by Patrick Rothfuss and it’s just fantastic. I expect to finish this soon and have lined up the following books next.
Astrophysics for People in a Hurry by Neil deGrasse Tyson. I’m not in a hurry, but my daughter got this for me last year and I looked at the first chapter, quite amazed at what an incredibly good writer Tyson is. It’s a quick read, but one that I think I’ll really enjoy.
Other Minds by Peter Godfrey-Smith is a book about the nature of consciousness, with an emphasis on cognition and intelligence in cephalopods. Nonhuman intelligence (primates, social insects, birds, cephalopods, machines) is a topic that I expect to occupy more of my time and thinking in the next few years. There have been mixed reviews of this one, though, so I’m approaching it with caution. I have no problem quitting a book that starts to fall apart.
Strange Affair by Peter Robinson. This will be the 8th novel I’ve read by Robinson. I’m reading this series in no particular order; I just found this in a used book store last week for 99¢, so I could not pass it up.
I’m not sure I’ll get through these this month, but we’ll see. As I said, this is my month for reading for pleasure, not out of compulsion. There’s no better way to read, and I’m looking forward to enjoying the ideas, the words, the characters, and the concepts. If you’re also planning on some reading for the summer or have done some summer reading already, here’s hoping you found a new favourite book or author.
Are you interested in Open Science? Are you already implementing Open Science practices in your lab? Are you skeptical of Open Science? I have been all of the above and some recent debates on #sciencetwitter have been discussing the pros and cons of Open Science practices. I decided to write this article to share my experiences as I’ve been pushing my own research in the Open Science direction.
Why Open Science?
Scientists have a responsibility to communicate their work to their peers and to the public. This has always been part of the scientific method, but the methods of communication have differed throughout the years and differ by field. This essay reflects my opinions on Open Science (capitalized to reflect that it is a set of principles), and I also give an overview of my lab’s current practices. I’ve written about this in my lab manual (which is also open), but until I sat down to write this essay, I had not really codified how my lab and research have adopted Open Science practices. This should not be taken as a recipe for your own science or lab, and these ideas may not apply to other fields. This is just my experience trying to adopt Open Science practices in my Cognitive Psychology lab.
Let’s get a few things out of the way…
First, I am not an expert in open science. In fact until about 2-3 years ago, it never even occurred to me to create a reproducible archive for my data, or to ensure that I could provide analysis scripts to someone else so that they could reproduce my analysis, or that I would provide copies of all of the items / stimuli that I used in a psychology experiment. I’ve received requests for data before, but I usually handled those in a piecemeal, ad hoc fashion. If someone asked, I would put together a spreadsheet.
Second, my experience is only generalizable to other comparable fields. I work in cognitive psychology and have collected behavioural data, survey questionnaire data, and electrophysiological data. I realize that data sharing can be complicated by ethics concerns for people who collect sensitive personal or health data. I realize that other fields collect complex biological data that may not lend itself well to immediate sharing.
Finally, the principles and best practices that I’m outlining here were adopted in 2018. Some of this was developed over the course of the last few years, but this is how we are running our lab now, and how we plan to run my research lab for the foreseeable future. That means there are still gaps: studies that were published a few years ago that have not yet been archived, papers that may not have a preprint, analyses that were done 20 years ago in SAS on the VAX 11/780 at the University at Buffalo. And if anyone wants to see data from my well-cited 1998 paper on prototype and exemplar theory, I can get it, but it is not going to be easy.
There are many aspects to Open Science, but I am going to outline three areas that cover most of these. There will be some overlap and some aspects may be missed.
Materials and Methods
The first aspect of Open Science concerns openness with respect to methods, materials, and reproducibility. In order to satisfy this criterion, a study or experiment should be designed and written up in such a way that another scientist or lab in the same field would be able to carry out the same kind of study if they wanted to. That means that any equipment that was used is described in enough detail or is readily available. It also means that computer programs that were used to carry out the study are accessible and the code is freely available. As well, in psychology, there are often visual, verbal, or auditory stimuli that participants make decisions about, or questions that they answer. These should also be available.
Data and Analysis
The second aspect of Open Science concerns the open availability of the data that have been collected in the study. In psychology, data take many forms, but usually refer to responses by participants on surveys, responses to visual stimuli, recordings of EEG, or data collected in an fMRI study. In other fields, data may consist of observations taken at a field station, measurements taken of an object or substance, or trajectories of objects in space. Anything that is measured, collected, or analyzed for a publication should be available to other scientists in the field.
Of course, in a research study or scientific project, the data that have been collected are also processed and analyzed. Here, several decisions need to be made. It may not always be practical to share raw data, especially if things were recorded by hand in a notebook or if the digital files are so large as to be unmanageable. On the other hand, it may not be useful to publish data that have been processed and summarized too much. For most fields, there is probably a middle ground where the data have been cleaned and minimally processed but no statistical analyses have been done, and the data have not been transformed. The path from raw data to this minimal state should be clear and transparent. In my experience so far, this is one of the most difficult decisions to make. I don’t have a solid answer yet.
In most scientific fields, data are analyzed using software and field-specific statistical techniques. Here again, several decisions need to be made while the research is being done in order to ensure that the end result is open and usable. For example, if you analyze your data with Microsoft Excel, what might be simple and straightforward to you might be uninterpretable to someone else. This is especially true if there are pivot tables, unique calculations entered into various cells, and transformations that have not been recorded. This, unfortunately, describes a large part of the data analysis I did as a graduate student in the 1990s. And I’m sure I’m not alone. Similarly, any platform that is proprietary will present limits to openness. This includes Matlab, SPSS, SAS, and other popular computational and analytic software. I think that’s why you see so many people who are moving towards Open Science practices encouraging the use of R and Python, because they are free, openly available, and they lend themselves well to scientific analysis.
Publication and Access
The third aspect of Open Science concerns the availability of the published data and interpretations: the publication itself. This is especially important for any research that is carried out at a university or research facility that is supported by public research grants. Most of these funding agencies require that you make your research accessible.
There are several good open access research journals that make the publications freely available for anyone because the author helps to cover the cost of publication. But many traditional journals are still behind a paywall and are only available for paid subscribers. You may not see the effects of this if you’re working in a university because your institution may have a subscription to the journal. The best solution is to create a free and shareable version of your manuscript, a preprint, that is available on the web and that anyone can access but does not violate the copyright of the publisher.
Putting this in practice
I tried to put some guidelines in place in my lab to address these three aspects of open science. I started with one overriding principle: When I submit a manuscript for publication in a peer-reviewed journal, I should also ensure that at the time of submission, I have a complete data file that I can share, analysis scripts that I can share, and a preprint.
I have implemented as much of this as possible with every paper that we’ve submitted for publication since late 2017 and with all our ongoing projects. We don’t submit a manuscript until we can meet the following:
We create a preprint of the manuscript that can be shared via a public online repository. We post this preprint to the online repository at the same time that we submit the manuscript to the journal.
We create shareable data files for all of the data collected in the study described in that manuscript. These are almost always unprocessed or minimally processed data in a Microsoft Excel spreadsheet or a text file. We don’t use Excel for any summary calculations, so the data are just data.
As we’re carrying out the data analysis, we document our analyses in R notebooks. We share the R scripts/notebooks for all of the statistical analyses and data visualizations in the manuscript. These are open and accessible and should match exactly what appears in the manuscript (a minimal sketch of what such a script looks like appears after this list). In some cases, we have posted R notebooks with additional data visualizations beyond what is in the manuscript as a way to add value to the manuscript.
We also create a shareable document for any nonproprietary assessments or questionnaires that were designed for this study and copies of any visual or auditory stimuli used in the study.
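Here is the kind of minimal, self-contained analysis script we aim for, as referenced in the list above. The file name, column names, and the particular test are hypothetical; the point is that every step from the shared data file to the reported statistic is visible in code.

```r
# A minimal sketch of a shareable analysis script. The file, columns, and
# test are hypothetical; the point is that nothing happens off-stage.
library(tidyverse)

raw <- read_csv("data/experiment1_trials.csv")  # the posted, minimally processed data

# All exclusions and processing are stated in code, not hidden in a spreadsheet.
by_subject <- raw %>%
  filter(rt > 200, rt < 5000) %>%               # trial-level exclusions, made explicit
  group_by(subject, condition) %>%
  summarise(mean_rt = mean(rt), .groups = "drop")

# The test reported in the manuscript, reproducible by anyone with the CSV.
t.test(mean_rt ~ condition, data = by_subject)

sessionInfo()  # record package versions so the analysis can be rerun years later
```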
Now, looking at this list of best practices, it would be disingenuous to suggest that every single paper from my lab meets all of those criteria. For example, one recently published study made use of Matlab instead of Python, because that’s how we knew how to analyze the data. But we’re using these principles as a guide as our work progresses. I view Open Science and these guidelines as an important and integral part of training my students. I view this as being just as important as the theoretical contributions that we’re making to the field.
Additional Resources and Suggestions
In order to achieve this goal, the following guidelines and resources have been helpful to me.
My public OSF profile lists current and recent projects. OSF stands for “Open Science Framework” and it’s one of many data repositories that can be used to share data, preprints, unformatted manuscripts, analysis code, and other things. I like OSF, and it’s kind of incredible to me that this wonderful resource is free for scientists to use. But if you work at a university or public research institute, your library probably runs a public repository as well.
For some studies, preregistration may be a helpful additional step in carrying out the research. There are limits to preregistration, many of which are addressed with Registered Reports. At this point, we haven’t done any Registered Reports. Preregistration is helpful, though, because it encourages the researcher to lay out a list of analyses they plan to do, to describe how the data are going to be collected, and to make that plan publicly available before the data are collected. This doesn’t mean that preregistered studies are necessarily better, but it’s one more tool to encourage openness in science.
Python and R
If you’re interested in open science it really is worth looking closely at R and Python for data manipulation, visualization, and analysis. In psychology, for example, SPSS has been a long-standing and popular way to analyze data. SPSS does have a syntax mode that allows the researcher to share their analysis protocol, but that mode of interacting with the program is much less common than the GUI version. Furthermore, SPSS is proprietary. If you don’t have a license, you can’t easily look at how the analyses were done. The same is true of data manipulation in Matlab. My university has a license, but if I want to share my data analysis with a private company, they may not have a license. But anyone in the world can install and use R and Python.
Science isn’t a matter of belief. Science works when people trust in the methodology, the data and interpretation, and by extension, the results. In my view, Open Science is one of the best ways to encourage scientific trust and to encourage knowledge organization and synthesis.
I teach a course at my Canadian university on the Psychology of Thinking and in this course, we discuss topics like concept formation, decision making, and reasoning. Many of these topics lend themselves naturally to the discussion of current topics and in one class last year, after a recent mass shooting in the US, I posed the following question:
“How many of you think that the US is a dangerous place to visit?”
About 80% of the students raised their hands. This was surprising to me because although I live and work in Canada and I’m a Canadian citizen, I grew up in the US; my family still lives there and I still think it’s a reasonably safe place to visit. Most students justified their answer by referring to school shootings, gun violence, and problems with American police. Importantly, none of these students had ever actually encountered violence in the US. They were thinking about it because it has been in the news. They were making a judgment about the likelihood of violence on the basis of the available evidence.
The example above illustrates a cognitive bias known as the Availability Heuristic. The idea, originally proposed in the early 1970s by Daniel Kahneman and Amos Tversky (Kahneman & Tversky, 1979; Tversky & Kahneman, 1974), is that people generally make judgments and decisions on the basis of the most relevant memories that they retrieve and that are available at the time that the assessment or judgement is made. In other words, when you make a judgment about a likelihood of occurrence, you search your memory and make your decision on the basis of what you remember. Most of the time, this heuristic produces useful and correct evidence. But in other cases, the available evidence may not correspond to the actual evidence in the world. For example, we typically overestimate the likelihood of shark attacks, airline accidents, lottery wins, and gun violence.
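To see why retrieval can mislead, here is a toy simulation of the availability idea in R: if vivid events are far more likely to be remembered, then estimates based on what comes to mind will inflate rare, dramatic risks. All of the rates are invented for illustration.

```r
# Availability as biased sampling from memory. All rates here are made up.
set.seed(42)
true_rate  <- 0.001                            # actual rate of dramatic events
events     <- rbinom(1e5, 1, true_rate)        # what actually happens
memorable  <- ifelse(events == 1, 0.9, 0.01)   # dramatic events are far more memorable
remembered <- rbinom(length(events), 1, memorable) == 1

mean(events)              # the true rate: about 0.001
mean(events[remembered])  # the rate as judged from available memories: inflated roughly 80-fold
```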
Another cognitive bias (also from Kahneman and Tversky) is known as the Representativeness Heuristic. This is the general tendency to treat individuals as representative of their entire category. For example, suppose I formed a concept of American gun owners as being violent (based on what I’ve read or seen in the news); I might infer that each individual American gun owner is violent. I’d be making a generalization, or a stereotype, and this can lead to bias in how I treat people. As with availability, the representativeness heuristic arises out of the natural tendency of humans to generalize information. Most of the time, this heuristic produces useful and correct evidence. But in other cases, the representative evidence may not correspond exactly to individual cases in the world.
The Gun Debate in the US
I’ve been thinking about this a great deal as the US engages in its ongoing debate about gun violence and gun control. It’s been reported widely that the US has the highest rate of private gun ownership in the world, and also has an extraordinary rate of gun violence relative to other countries. These are facts. Of course, we all know that “correlation does not equal causation”, but many strong correlations do derive from a causal link. The most reasonable thing to do would be to begin to implement legislation that restricts access to firearms, but this never happens, even though many people are passionate about the need to restrict guns.
So why do we continue to argue about this? One problem that I rarely see discussed is that many of us have limited experience with guns and/or violence, have to rely on what we know from memory and from external sources, and are therefore susceptible to cognitive biases.
Let’s look at things from the perspective of an average American gun owner. This might be you, people you know, family, etc. Most of these gun owners are very responsible, knowledgeable, and careful. They own firearms for sport and also for personal protection, and in some cases even run successful training courses for people to learn about gun safety. From the perspective of a responsible and passionate gun owner, it seems to be quite true that the problem is not guns per se but the bad people who use them to kill others. After all, if you are safe with your guns, and all your friends and family are safe, law-abiding gun owners too, then those examples will be the most available evidence for you to use in a decision. And so you base your judgements about gun violence on this available evidence and decide that gun owners are safe. As a consequence, gun violence is not a problem of guns and their owners, but must be a problem of criminals with bad intentions. Forming this generalization is an example of the availability heuristic. It may not be entirely wrong, but it is a result of a cognitive bias.
But many people (me included) are not gun owners. I do not own a gun, but I feel safe at home. As violent crime rates decrease, the likelihood of being a victim of a personal crime that a gun could prevent is very small; most people will never find themselves in this situation. In addition, my personal freedoms are not infringed by gun regulation, and I too recognize that illegal guns are a problem. If I generalize from my experience, I may have difficulty understanding why people would need a gun in the first place, whether for personal protection or for a vaguely defined “protection from tyranny”. From my perspective it’s far more sensible to focus on reducing the number of guns. After all, I don’t have one, and I don’t believe I need one, so I generalize to assume that anyone who owns firearms might be suspect or irrationally fearful. Forming this generalization is also an example of the availability heuristic. It may not be entirely wrong, but it is a result of a cognitive bias.
In each case, we are relying on cognitive biases to infer things about others and about guns. These inferences may be stifling the debate.
How do we overcome this?
It’s not easy to overcome a bias, because these cognitive heuristics are deeply engrained and indeed arise as a necessary function of how the mind operates. They are adaptive and useful. But occasionally we need to override a bias.
Here are some proposals, but each involves taking the perspective of someone on the other side of this debate.
Those of us on the left of the debate (liberals, proponents of gun regulations) should try to recognize that nearly all gun enthusiasts are safe, law-abiding people who are responsible with their guns. Seen through their eyes, the problem lies with irresponsible gun owners. What’s more, the desire to place restrictions on their legally owned guns activates another cognitive bias known as the endowment effect, in which people place high value on something that they already possess; the prospect of losing it is seen as aversive because it increases the feeling of uncertainty about the future.
Those on the right (gun owners and enthusiasts) should consider the debate from the perspective of non gun owners and consider that proposals to regulate firearms are not attempts to seize or ban guns but rather attempts to address one aspect of the problem: the sheer number of guns in the US, any of which could potentially be used for illegal purposes. We’re not trying to ban guns, but rather to regulate them and encourage greater responsibility in their use.
I think these things are important to deal with. The US really does have a problem with gun violence. It’s disproportionately high. Solutions to this problem must recognize the reality of the large number of guns, the perspectives of non gun owners, and the perspectives of gun owners. We’re only going to do this by first recognizing these cognitive biases and then attempting to overcome them in ways that search for common ground. By recognizing this, and maybe stepping back just a bit, we can begin to have a more productive conversation.
If you follow my blog or medium account, you’ve probably already read some of my thoughts and musings on the topic of running a research lab, training graduate students, and being a mentor. I think I wrote about that just a few weeks ago. But if you haven’t read any of my previous essays, let me provide some context. I’m a professor of Psychology at a large research university in Canada, the University of Western Ontario. Although we’re seen as a top choice for undergraduates because of our excellent teaching and student life, we also train physicians, engineers, lawyers, and PhD students in dozens of fields. My research group fits within the larger area of Cognitive Neuroscience, which is one of our university’s strengths.
Within our large group (Psychology, the Brain and Mind Institute, BrainsCAN, and other groups) we have some of the very best graduate students and postdocs in the world, not to mention my excellent faculty colleagues. I’m not writing any of this to brag or boast, but rather to give the context that we’re a good place to be studying cognition, psychology, and neuroscience.
And I’m not sure any of our graduates will ever get jobs as university professors.
The Current State of Affairs
Gordon Pennycook, from Waterloo (and soon the University of Regina), wrote an excellent blog post and paper on the job market for cognitive psychology professors in Canada. You might think this is too specialized, but he makes the case that we can probably extrapolate to other fields and countries and find the same thing. But since this is my field (and Gordon’s also), it’s easy to see how this affects students in my lab and in my program.
One thing he noted is that the average Canadian tenure-track hire now has 15 publications on their CV when hired. That’s a long CV, as long as what I submitted in my tenure dossier in 2008. It’s certainly a longer CV than what I had when I was hired at Western in 2003. I was hired with 7 publications (two first author) after three years as a postdoc and three years of academic job applications. And it’s certainly longer than what the most eminent cognitive psychologists had when they were hired. Michael Posner, whose work I cite to this day, was hired straight from Wisconsin with one paper. John Anderson, whose work I admire more than any other cognitive scientist’s, was hired at Yale with a PhD from Stanford and 5 papers on his CV. Nancy Kanwisher was hired in 1987 with 3 papers from her PhD at UCLA.
Compare that to a recent hire in my own group, who was hired with 17 publications in great journals and was a postdoc for 5 years. Or compare that to most of our recent hires and short-listed applicants who have completed a second postdoc before they were hired. Even our postdoctoral applicants, people applying for 2-3 year postdocs at my institution, are already postdocs and are looking to get a better postdoc to get more training and become more competitive.
So it’s really a different environment today.
The fact is, you will not get a job as a professor straight after finishing a PhD. Not in this field and not in most fields. Why do I say this? Well, for one, it’s not possible to publish 15-17 papers during your PhD career. Not in my lab, at least. Even if I added every student to every paper I published, they would not have a CV with that many papers; I simply can’t publish that many papers and keep everything straight. And I can’t really put every student on every paper anyway. If the PhD is not adequate for getting a job as a professor, what does that mean for our students, our program, and for PhD programs in general?
Most students enter a PhD program with the idea of becoming a professor. I know this because I used to be the director of our program and that’s what nearly every student says, unless they are applying to our clinical program with the goal of being a clinician. If students are seeking a PhD to become a professor, but we can clearly see that the PhD is not sufficient, then students’ expectations are not being met by our program. We admit students to the PhD with most hoping to become university professors, and then they slowly learn that it’s not possible. Our PhD is, in this scenario, merely an entry into the ever-lengthening postdoc stream, which is where you actually prepare to be a professor. We don’t have well-thought-out alternatives for any other stream.
But we can start.
Here’s my proposal
We have to level with students and applicants right away that “tenure-track university professor” is not going to be the end game of the PhD. Even the very best students will be looking at 1-2 postdocs before they are ready for that. For academic careers, the PhD is training for the postdoc in the same way that med school is training for residency and fellowship.
We need to encourage students to begin thinking about non-academic careers in their first year. This means encouraging students’ ownership of their career planning. There are top-notch partnership programs like Mitacs and OCE (these are Canadian, but programs like this exist in the US, EU, and UK) that help students transition into corporate and industrial careers. We have university programs as well. And we can encourage students to look at certificate programs to ensure that their skills match the market. But students won’t always know about these things if their advisors don’t know or care.
We need to emphasize and cultivate a supportive atmosphere. Be open and honest with students about these things and encourage them to be open as well. Students should be encouraged to explore non-academic careers and not made to feel guilty for “quitting academia”.
I’m trying to manage these things in my own lab. It is not always easy, because I was trained to all but expect that the PhD would lead into a job as a professor. That was not really true when I was a student and it’s even less true now. But I have to adapt. Our students and trainees have to adapt, and it’s incumbent upon us to guide and advise.
I’d be interested in feedback on this topic.
Are you working on a PhD to become a professor?
Are you a professor wondering if you’d be able to actually get a job today?
Are you training students with an eye toward technical and industrial careers?
It is well documented that the Trump administration is pursuing a senselessly cruel policy of prosecuting migrants at the border, detaining families, and incarcerating them in large, improvised detention centres. This includes taking children away from their parents and siblings and housing them separately for an extended period.
Jeff Sessions has said that this policy is “simply enforcing the law” and that it’s a deterrent. He lays any negative consequences on the migrant families themselves, asking why they would risk bringing their children on this long and dangerous trek. Other members of the administration have pointed out that families who claim asylum at ports of entry are not being detained or split apart. This too is disingenuous: the Trump administration has narrowed the reasons for asylum, and as the border has become increasingly militarized, migrants and asylum-seekers are being forced away from busy ports of entry and often into dangerous crossings.
How did we get to this point? How did a nation which once prided itself on welcoming immigrants become a nation increasingly looking to punish individuals even as they seek asylum? Although some aspects of this cruel policy have long been present in America’s history, I think that particular fixation on migration from Mexico stems from an unintended starting point.
A recent podcast by Malcolm Gladwell explored the causes and effects of the militarized US-Mexico border. I found this podcast fascinating and I recommend listening to it. To summarize: for most of the 20th century, into the 1960s and 1970s, migration between the United States and Mexico was primarily cyclical. Migrants from rural areas near the border in Mexico would move to the United States for work, stay for a few months, and move back to Mexico with their families. This was an economic relationship, and it worked because the cost of crossing the border was essentially zero. If you were apprehended, you’d be returned, but otherwise the system allowed for the flow of migrants into and out of the United States.
In the early 1970s, however, the US-Mexico border began to be militarized. It happened almost by accident. An extremely skilled and dedicated retired Marine general took over the Immigration and Naturalization Service and began to tighten up the way in which border patrols operated. There was never any intent to cause suffering. On the contrary, the original intent seems to have been to harmonize border enforcement with existing law in a way that benefited everyone. But what happened was that as the border became less porous, migrants began seeking out more dangerous border crossings. Often these were in the high desert, where the risk of injury and death was higher. As the cost of crossing the border back and forth increased due to this danger, migrants became less likely to engage in cyclical migration and instead stayed in the United States, either sending money home to Mexico or bringing their families here.
This has profound implications for the current state of affairs. As each successive administration cracks down on illegal immigration, tightens the border, and militarizes the border patrol, it increases the risks and costs associated with crossing back and forth. Migrants still want to come to America, people are still claiming asylum, but illegal immigrants in the United States are persecuted and stay in hiding. Every indication is that the worst possible thing that could be done would be the actual construction of a wall. In some ways, an analogy can be drawn to desire paths in public spaces. There is a natural flow to collective human behaviour. Civic planning and architecture do not always match it, but human behaviour will always win out. People will continue to migrate, and this will continue to be a problem.
Gladwell doesn’t say this, but it seems to me that the most rational and humane solution is a porous border. With a porous border, illegal immigrants are turned back when apprehended, but in a straightforward way. People are not put into detention centers. Families are not charged with committing a misdemeanour offence and jailed prior to their hearings, necessitating the removal of their children. With a porous border, there is still border security, but the overall level of enforcement is lower. In addition, a policy like this could benefit from increased access to green cards, recognizing that many migrants wish to work in the United States for only a few months. Unfortunately, no one in the Southwest (or anywhere else in America) is going to win an election with the promise of “Let’s make our border more porous and engage in lax border security.” That will not sell. But the evidence presented by the Mexican Migration Project and reviewed by Gladwell in his podcast suggests this would still be the most rational solution.
More Objective Research
This is one of those cases where we need more objective policy research and less political rhetoric. Has anyone asked an algorithm or computer model to determine the ideal level of border security? How much flow is tolerable? How does one balance the economic detriment of a relatively free flow of migrants against the costs associated with apprehension, detention, deportation, and any associated criminal proceedings? The latter are expensive and human-resource intensive. Do the risks of a porous border justify these expenses?
The thing is, these are computational problems. These are problems that demand rigorous computational analysis, not moralistic grandstanding about breaking the law or fears of drugs and criminals pouring over the border.
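To show what I mean, here is a deliberately toy version of that cost-benefit question in R. Every coefficient is invented; a real model would need actual data on flows, enforcement budgets, and humanitarian outcomes. The point is only that “how much enforcement?” is an optimization problem, not a slogan.

```r
# A toy cost model of border enforcement. Every number here is made up.
total_cost <- function(e) {          # e = enforcement level, 0 (open) to 1 (militarized)
  enforcement_spend <- 80 * e^2      # patrols, detention, courts
  crossing_harm     <- 30 * e        # injuries and deaths on more dangerous routes
  unchecked_flow    <- 50 * (1 - e)  # whatever costs one attributes to unregulated flow
  enforcement_spend + crossing_harm + unchecked_flow
}

e_grid <- seq(0, 1, by = 0.01)
e_grid[which.min(sapply(e_grid, total_cost))]
# With these invented numbers the minimum falls at a low but nonzero level of
# enforcement. The answer depends entirely on the coefficients, which is
# exactly why the question demands real data and real modelling.
```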
The evidence seems to suggest that for decades, the relatively porous border had no ill effects on American society and was mutually beneficial to the US and to Mexican border regions. Though unintended, the slow militarization of the US-Mexico border restricted migration and made it more dangerous, which led to the real costs of illegal immigration, thus necessitating a stronger, more militaristic response, which creates a feedback loop. The harsher the enforcement, the worse the problem gets.
The current administration has adopted the harshest enforcement yet, one that in my view is intentionally cruel, is a clear moral failing, and one that may be destined to fail anyway.
Knowing something about the basic functional architecture of the brain is helpful in understanding the organization of the mind and in understanding how we think and behave. But when we talk about the brain, it’s nearly impossible to do so without using conceptual metaphors (when we talk about most things, it’s impossible to do so without metaphors).
Conceptual metaphor theory is a broad theory of language and thinking from the extraordinary linguist George Lakoff. One of the basic ideas is that we think about things and organize the world into concepts in ways that correspond to how we talk about them. It’s not just that language directs thought (that’s Whorf’s idea), but that these two things are linked and our language also provides a window into how we think about things.
Probably the most common metaphor for the brain is the “brain is a computer” metaphor, but there are other, older ideas.
The hydraulic brain
One interesting metaphor for brain and mind is the hydraulic metaphor. This probably goes back at least to Descartes (and probably earlier), who advocated a model of neural function whereby basic functions were governed by a series of tubes carrying “spirits” or vital fluids. In Descartes’s model, higher-order thinking was handled by a separate mind that was not quite in the body. You might laugh at the idea of brain tubes, but it seems quite reasonable as a theory from an era when bodily fluids were the most obvious indicators of health, sickness, and simply being alive: blood, discharge, urine, pus, bile, and other fluids are all indicators of things either working well or not working well. And when they stop, you stop. In Descartes’s time, these were the primary ways to understand the human body. So in the absence of other information about how thoughts and cognition occur, it makes sense that early philosophers and physiologists would make an initial guess that thoughts in the brain are also a function of fluids.
Metaphors for thinking
This idea, no longer endorsed, lives on in our language in the conceptual metaphors we use to talk about the brain and mind. We often talk about cognition and thinking as information “flowing”, in the same way that fluid might flow. We have common expressions in English like the “stream of consciousness”, “waves of anxiety”, “deep thinking”, “shallow thinking”, ideas that “come to the surface”, and memories that come “flooding back” when you encounter an old friend. These all have their roots (“roots” is another conceptual metaphor of a different kind!) in the older idea that thinking and brain function are controlled by the flow of fluids through tubes in the brain.
In the modern era, it is still common to discuss neural activation as a “flow of information”. We might say that information “flows downstream”, or that there is a “cascade” of neural activity. Of course we don’t really mean that neural activation and cognition are flowing like water, but like so many metaphors, it’s just impossible to describe things without using these expressions, and in doing so we activate the common conceptual metaphor that thinking is a fluid process.
There are other metaphors as well (like the electricity metaphor: behaviours being “hard wired”, getting our “wires crossed”, an idea that “lights up”), but I think the hydraulic metaphor is my favourite because it captures the idea that cognition is fluid. We can dip our toes in the stream or hold back floods. And as you can see from earlier posts, I have something of a soft spot for river metaphors.
Here’s a question that I often ask myself: How much should I be managing my lab?
I was meeting with one of my trainees the other day and this grad student mentioned that they sometimes feel like they don’t know what to do during the work day and that they sometimes feel like they are wasting a lot of their time. As a result, this student will end up going home and maybe working on a coding class, or (more often) doing non grad school things. We talked about what this student is doing and I agreed: they are wasting a lot of time, and not really working very effectively.
Before I go on, some background…
There is no shortage of direction in my lab, or at least I don’t think so. I think I have a lot of things in place. Here’s a sample:
I have a detailed lab manual that all my trainees have access to. I’ve sent this document to my lab members a few times, and it covers a whole range of topics about how I’d like my lab group to work.
We meet as a lab 2 times a week. One day is to present literature (journal club) and the other day is to discuss the current research in the lab. There are readings to prepare, discussions to lead, and I expect everyone to contribute.
I meet with each trainee, one-on-one, at least every other week, and we go through what each student is working on.
We have an active lab Slack team, every project has a channel.
We have a project management Google sheet with deadlines and tasks that everyone can edit, add things to, see what’s been done and what hasn’t been done.
So there is always stuff to do, but I also try not to micromanage my trainees. I generally assume that students will want to be learning and developing their scientific skill set. This student is someone who has been pretty set on looking for work outside of academics, and I’m a big champion of that. I am a champion of helping any of my trainees find a good path. But despite all the project management and meetings, this student was feeling lost and never sure what to work on. And so they were feeling like grad school has nothing to offer in the realm of skill development for this career direction. Are my other trainees feeling the same way?
Too much or too little?
I was kind of surprised to hear one of my students say that they don’t know what to work on, because I have been working harder than ever to make sure my lab is well structured. We’ve even dedicated several lab meetings to the topic.
The student asked what I work on during the day, and it occurred to me that I don’t always discuss my daily routine. So we met for over an hour and I showed this student what I’d been working on for the past week: an R-notebook that will accompany a manuscript I’m writing that will allow for all the analysis of an experiment to be open and transparent. We talked about how much time that’s been taking, how I spent 1-2 days optimizing the R code for a computational model. How this code will then need clear documentation. How the OSF page will also need folders for the data files, stimuli, the experimenter instructions. And how those need to be uploaded. I have been spending dozens of hours on this one small part of one component of one project within one of the several research areas in my lab, and there’s so much more to do.
Why aren’t my trainees doing the same? Why aren’t they seeing this, despite all the project management I’ve been doing?
I want to be clear: I am not trying to be critical in any way of any of my trainees. I’m not singling anyone out. They are good students, and it’s literally my job to guide and advise them. So I’m left with the feeling that they are feeling unguided, with the perception that there’s not much to do. If I’m supposed to be the guide and they are feeling unguided, this seems like a problem with my guidance.
What can I do to help motivate?
What can I do to help them organize, feel motivated, and productive?
I expect some independence from PhD students, but am I giving them too much? I wonder if my lab would be a better training experience if I were just a bit more of a manager.
Should I require students to be in the lab every day?
Should I expect daily summaries?
Should I require more daily evidence that they are making progress?
Am I sabotaging my efforts to cultivate independence by letting them be independent?
Would my students be better off if I assumed more of a top down, managerial role?
I don’t know the answers to these questions. But I know that there’s a problem. I don’t want to be a boss, expecting them to punch the clock, but I also don’t want them to float without purpose.
I’d appreciate input from other PIs. How much independence is too much? Do you find that your grad students are struggling to know what to do?
If you have something to say about this, let me know in the comments.