Tag Archives: cognition

Psychology and the Art of Dishwasher Maintenance

The Importance of Knowing

It’s useful and powerful to know how something works. “Knowledge is power” may be a common and overused expression, but that does not make it inaccurate. Let me illustrate the idea with a story from a different area. I use this rhetorical device often, by the way: illustrating one idea with an analogy from another domain. It’s probably a result of being a professor and lecturer for so many years. I try to show the connections between concepts and examples. It can aid understanding. It can also be an annoying habit.

My analogy has to do with a dishwasher. I remember the first time I figured out how to repair the dishwasher in my kitchen. It’s something of a mystery how a dishwasher even works, because you never see it working (unless you do this). You just load the dishes, add the detergent, close the door, and start the machine. It runs its cycle out of direct view, and when the cycle is finished, clean dishes emerge. So there’s an input, some internal state where something happens, and an output. We know what happens, but not exactly how it happens. We usually study psychology and cognition in the same way. We can know a lot about what’s going in and what’s coming out. We don’t know as much about what’s going on inside, because we can’t observe it directly. But we can make inferences about what’s happening based on the function.

The Dishwasher Metaphor of the Mind

So let’s use this idea for a bit. Let’s call it the “dishwasher metaphor”. The dishwasher metaphor for the mind assumes that we can observe the inputs and outputs of psychological processes, but not their internal states. We can make guesses about how the dishwasher achieves its primary function of producing clean dishes based on what we can observe about the input and output. We can also make guesses about the dishwasher’s functions by looking at a dishwasher that is not running and examining the parts. We can make guesses about its functions by observing what happens when it is not operating properly. And we can even make guesses about its functions by experimenting with the input, changing how we load the dishes for example, and observing how that affects the outputs. But most of this is careful, systematic guessing. We can’t actually observe the internal behaviour of the dishwasher. It’s mostly hidden from our view, impenetrable. Psychological science turns out to be a lot like trying to figure out how the dishwasher works. For better or worse, science often involves careful, systematic guessing.

Fixing the Broken Dishwasher

The dishwasher in my house was a pretty standard early 2000s model by Whirlpool, though sold under the KitchenAid brand. It worked really well for years, but at some point, I started to notice that the dishes weren’t getting as clean as they used to. Not knowing what else to do, I tried to clean it by running it empty. This didn’t help. It seemed like water was not getting to the top rack. If I opened it up while it was running, I could try to get an idea of what was going on; opening the door stops the water, but you can catch a glimpse of where it is being sprayed. When I did this, I could see that little or no water was coming out of the top sprayer arm. So now I had the beginnings of a theory of what was wrong, and I could begin testing hypotheses to determine how to fix it. What’s more, this hypothesis testing also helped to enrich my understanding of how the dishwasher actually worked.

Like any good scientist, I consulted the literature; in this case, YouTube and do-it-yourself websites. According to the literature, several things can affect the ability of the water to circulate. The pump is one of them. The pump fills the unit with water and pushes the water around at high enough velocity to wash the dishes. So if the pump was not operating correctly, the water would not circulate and would not clean the dishes. But the pump is not easy to service, and besides, if it were malfunctioning, the unit would not be filling or draining at all. So I reasoned that it must be something else.

There are other mechanisms and operations that could be failing and restricting the water flow within the dishwasher. The most probable cause was that something was clogging the filter that is supposed to keep particles from entering the pump or drain. It turns out that there’s a small wire screen underneath some of the sprayer arms. Attached to that is a small chopping blade that chops and macerates food particles to ensure that they don’t clog the screen. But after a while, small particles can still build up around it and stop it from spinning, which stops the blades from chopping, which lets more food particles build up, which eventually restricts the flow of water, which means there’s not enough pressure to force water to the top level, which means there’s not enough water cleaning the dishes on the top rack, which leads the dishwasher to fail. Which is exactly what I had been observing. I was able to clean and service the chopper blade and screen, and even installed a replacement. Knowing how the dishwasher works allowed me to keep a closer eye on that part, cleaning it more often. Knowing how the dishwasher worked gave me some insight into how to get cleaner dishes. Knowledge, in this case, was a powerful thing.

Trying to study what you can’t see

And that’s the point that I’m trying to make with the dishwasher metaphor. We don’t necessarily need to understand how something works to know that it’s doing its job. We don’t need to understand how it works to use it. And it’s not easy to figure it out, since we can’t observe the internal state. But knowing how it works, and reading about how others have figured out how it works, can give you insight into how the processes work. And knowing how the processes work can give you insight into how you might improve the operation, how you can avoid getting dirty dishes.

Levels of Dishwasher Analysis

This is just one example, of course, and just a metaphor, but it illustrates how we can study something we can’t quite see. Sometimes knowing how something works can help in the operation and the use of that thing. More importantly, this metaphor can help to explain another theory of how we explain and study something. I am going to use the metaphor in a slightly different way and then we’ll put it away, just like we put away the clean dishes. They are there in the cupboard, still retaining the effects of the cleaning process, ready to be brought back out again and used: a memory of the cleaning process.

Three ways to explain things

I think we can agree that there are different ways to clean dishes, different kinds of dishwashers, and different steps that you can take when washing the dishes. For washing dishes, I would argue that there are three different levels that we can use to explain and study things. First there is the basic function of what we want to accomplish: cleaning dishes. This is abstract and does not specify who does it or how it happens, just that it does. And because it’s a function, we can think about it as almost computational in nature. We don’t even need physical dishes to understand this function, just that we are taking some input (the dirty dishes) and specifying an output (clean dishes). Then there is a less abstract level that specifies a process for how to achieve the abstract function. For example, a dishwashing process should first rinse off food, use detergent to remove grease and oils, rinse off the detergent, and then maybe dry the dishes. This is a specific series of steps that will accomplish the computation above. It’s not the only possible series of steps, but it’s one that works. And because this is like a recipe, we can call it an algorithm. When you follow these steps, you will obtain the desired results. There is also an even more specific level. We can imagine that there are many ways to build a system to carry out the steps in the algorithm so that they produce the desired computation. My Whirlpool dishwasher is one way to implement these steps. Another model of dishwasher might carry them out in a slightly different way. And the same steps could also be carried out by a completely different system (one of my kids washing dishes by hand, for example). The function is the same (dirty dishes –> clean dishes) and the steps are the same (rinse, wash, rinse again, dry), but the steps are implemented by different systems (one mechanical and the other biological). One simple task, but three ways to understand and explain it.
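To make the three levels concrete, here is a minimal sketch in code. It is purely illustrative and not from the original post: the step names, class names, and dish-washing “logic” are invented, but it shows how one abstract function and one algorithm can be realized by two different implementations.

```python
# Computational level: the function itself -- dirty dishes in, clean dishes out.
# This only says *what* must happen, not *how*.
def wash_function(dishes: list) -> list:
    raise NotImplementedError("the computational level specifies the mapping, not the mechanism")

# Algorithmic level: one particular series of steps that achieves the function.
STEPS = ["rinse off food", "apply detergent", "rinse again", "dry"]

# Implementation level: two different systems carrying out the same steps.
class Dishwasher:
    def wash(self, dishes):
        for step in STEPS:
            print(f"machine: {step}")        # pump, sprayer arms, heating element
        return [d.replace("dirty", "clean") for d in dishes]

class KidAtTheSink:
    def wash(self, dishes):
        for step in STEPS:
            print(f"kid: {step}")            # hands, sink, dish towel
        return [d.replace("dirty", "clean") for d in dishes]

# Same function, same algorithm, two implementations.
print(Dishwasher().wash(["dirty plate"]))
print(KidAtTheSink().wash(["dirty plate"]))
```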

David Marr and Levels of Analysis

My dishwasher metaphor is pretty simple and kind of silly. But there are theorists who have discussed more seriously the different ways to know and explain psychology. Our behaviour is one observable aspect of this picture. Just as the dishwasher makes clean dishes, we behave to make things happen in our world. That’s a function. And just like the dishwasher, there is more than one way to carry out a function, and there is also more than one way to build a system to carry out the function. The late and brilliant vision scientist David Marr argued that when trying to understand behaviour, the mind, and the brain, scientists can design explanations and theories at three levels. We refer to these as Marr’s Levels of Analysis (Marr, 1982). Marr worked on understanding vision. And vision is something that, like the dishwasher, can be studied at three different levels.


Marr described the Computational Level as an abstract level of analysis that examines the actual function of the process. We can study what vision does (like enabling navigation, identifying objects, even extracting regularly occurring features from the world) at this level, and this analysis need not be concerned with the actual steps or biology of vision. At Marr’s Algorithmic Level, we look to identify the steps in the process. For example, if we want to study how objects are identified visually, we specify the initial extraction of edges, the way the edges and contours are combined, and how these visual inputs are related to knowledge. At this level, just as in the dishwasher metaphor, we are looking at a series of steps but have not specified how those steps might be implemented. That examination is done at the Implementation Level, where we would study the visual system’s biological workings. And just as with the dishwasher metaphor, the same steps can be implemented by different systems (biological vision vs. computer vision, for example). Marr’s theory about how we explain things has been very influential in my thinking and in psychology in general. It gives us a way to know about something and study something at different levels of abstraction, and this can lead to insights about biology, cognition, and behaviour.

And so it is with the study of cognitive psychology. Knowing something about how your mind works, how your brain works, and how the brain and mind interact with the environment to generate behaviours can help you make better decisions and solve problems more effectively. Knowing something about how the brain and mind work can help you understand why some things are easy to remember and others are difficult. In short, if you want to understand why people—and you—behave a certain way, you need to understand how they think. And if you want to understand how people think, you need to understand the basic principles of cognitive psychology, cognitive science, and cognitive neuroscience.

Reference

Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman.

The Language of Sexual Violence


Women’s March leaders address a rally against the confirmation of Supreme Court nominee Judge Brett Kavanaugh in front of the court building on September 24. (Chip Somodevilla/Getty Images)

The language we use to describe something can provide insights into how we think about it. For example, we all reserve words for close family members (“Mama” or “Papa”) that have special meaning, and these words are often constrained by culture. And as elements of culture, linguistic conventions can sometimes tell us something very deep about how our society thinks about events.

Current Events

This week (late September 2018) has been a traumatic and dramatic one. A Supreme Court nominee, Brett Kavanaugh, was accused of an attempted rape 35 years ago. Both he and the accuser, Christine Blasey Ford, were interviewed at a Senate hearing. And much has been written and observed about the ways they spoke and communicated during this hearing. At the same time, many women took to social media to describe their own experiences with sexual violence. I have neither academic expertise nor personal experience with sexual violence. But like many, I’ve followed these events with shock and with heartbreak.

Survivors

I’ve noticed something this week about how women who have been victims of sexual violence talk about themselves and the persons who carried out the assault. First of all, many women identify as survivors and not victims. A victim is someone who had something happen to them. A survivor is someone who has been able to overcome (or is working to overcome) those bad things. I don’t know if this is always a conscious decision, though it could be. It is an effective way for a woman who had been a victim to show that she is a survivor, and I think many women use the term intentionally to show that they have survived something.

Part of The Self

But there is another linguistic construction that is even more interesting. I’ve noticed, especially in the news and on social media, that women say or write “my rapist”, “my abuser”, or “my assailant”. I don’t believe this is intentional or affected. I think it is part of the language because it’s part of how the person thinks about the event, or maybe part of how society thinks about the event. The language suggests that women have internalized the identity of the perpetrator, and that the event and the abuser have also become part of who they are as women. It’s deep and consequential in ways that few other events are.

Of course a sexual assault would be expected to be traumatic and even life changing, but I’m struck by how this is expressed in the idioms and linguistic conventions women use to describe the event. Their language suggests some personal ownership. It’s more than a memory for an event or an episode. It’s a memory for a person, a traumatic personal event, and also knowledge of the self. Autonoetic memory is deeply ingrained. It is “indelible in the hippocampus.”

All of us talk this way sometimes, of course. If you say “this cat” it’s different from saying “my cat”. The former is an abstraction or general conceptual knowledge. The latter is your pet. It’s part of your identity. “My mother”, “my car”, “my smartphone” are more personal but still somewhat general. But “my heart”, “my child”, “my body”, and “my breath” are deeply personal, and these things are just part of who we are.

Women don’t use this construction when talking about non-sexual violence. They might say “the person who cut me off” or “the guy who robbed me”. Similarly, men who have been assaulted don’t use this language. They say “the man who assaulted me”, “the guy who punched me”, or even “the priest who abused me”. And men do not use this language to refer to people they have assaulted (e.g. “my victim”). You might occasionally hear or read men refer to “my enemy” or “my rival”, which, I think, carries the same deeper, more profound meaning as the terms women use for sexual violence, though it is not as traumatic. So by and large this seems to be something that women say about sexual violence specifically.

Deep and Personal Memory

So when a woman says “my rapist”, it suggests a deep and personal knowledge. Knowledge that has stayed and will stay with them, affect their lives, and affect how they think about the event and themselves. Eyewitness memory is unreliable. Memory for facts and events—even personal ones—is malleable. But you don’t forget who someone is. You don’t forget the sound of your sibling’s voice. You don’t forget the sight of your children. You don’t forget your address. You don’t forget your enemy…and you would not forget your abuser or your rapist.

The Cognitive Science Age


Complex patterns in the Namib desert resemble neural networks.

The history of science and technology is often delineated by paradigm shifts. A paradigm shift is a fundamental change in how we view the world and our relationship with it. The big paradigm shifts are sometimes even referred to as an “age” or a “revolution”. The Space Age is a perfect example. The middle of the 20th century saw not only an incredible increase in public awareness of space and space travel; many of the industrial and technical advances that we now take for granted were also byproducts of the Space Age.

The Cognitive Science Age

It’s probably cliché to write this, but I believe we are at the beginning of a new age, and a new, profound paradigm shift. I think we’re well into the Cognitive Science Age. I’m not sure anyone calls it that, but I think that is what truly defines the current era. And I also think that an understanding of Cognitive Science is essential for understanding our relationships with the world and with each other.

I say this because in the 21st century, artificial intelligence, machine learning, and deep learning are being fully realized. Every day, computers are solving problems, making decisions, and making accurate predictions about the future…about our future. Algorithms shape our behaviour in more ways than we realize. We look forward to autonomous vehicles that will depend on the simultaneous operation of many computers and algorithms. Machines have become, and will continue to become, central to almost everything.

And this is a product of Cognitive Science. As cognitive scientists, this new age is our idea, our modern Prometheus.

Cognitive Science 

Cognitive Science is an interdisciplinary field that first emerged in the 1950s and 1960s and sought to study cognition, or information processing, in its own right rather than as a strictly human psychological concept. As a new field, it drew from Cognitive Psychology, Philosophy, Linguistics, Economics, Computer Science, Neuroscience, and Anthropology. Although people still tend to work and train in those more established traditional fields, it seems to me that society as a whole is indebted to the interdisciplinary nature of Cognitive Science. And although it is a very diverse field, the most important aspect in my view is the connection between biology, computation, and behaviour.

The Influence of Biology

A dominant force in modern life is the algorithm: a computational engine that processes information and makes predictions. Learning algorithms take in information, learn to make associations, make predictions from those associations, and then adapt and change. This is referred to as machine learning, but the key here is that these machines learn in ways inspired by biology.

For example, the algorithm (Hebbian learning) that inspired machine learning was described by the psychologist and neuroscientist Donald Hebb at McGill University. Hebb’s 1949 book The Organization of Behavior is one of the most important books written in this field and explained how neurons learn associations. This concept was refined mathematically by the Cognitive Scientists Marvin Minsky, David Rumelhart, James McClelland, Geoff Hinton, and many others. The advances we see now in machine learning and deep learning are a result of Cognitive Scientists learning how to adapt and build computer algorithms to match algorithms already seen in neurobiology. This is a critical point: it’s not just that computers can learn, but that the learning and adaptability of these systems is grounded in an understanding of neuroscience. That’s the advantage of an interdisciplinary approach.
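As a rough illustration of the kind of rule Hebb described, here is a minimal sketch of Hebbian learning: connections between units that are active together get stronger. The learning rate, number of inputs, and input patterns are invented for the example; this is a textbook toy, not the specific formulation used by any of the researchers named above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 4
w = np.zeros(n_inputs)      # synaptic weights, all start at zero
eta = 0.1                   # learning rate (assumed value)

for _ in range(100):
    x = rng.integers(0, 2, n_inputs).astype(float)  # presynaptic activity (0 or 1)
    y = x[0]                # postsynaptic unit driven by input 0 (toy assumption)
    w += eta * x * y        # Hebbian update: co-active connections strengthen

print(np.round(w, 2))       # the weight from the driving input grows the fastest
```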

The Influence of Behaviour 

As another example, the theoretical grounding for the AI revolution was developed by Allen Newell (a computer scientist) and Herbert Simon (an economist). Their work from the 1950s through the 1970s on understanding human decision making and problem solving, and on how to model them mathematically, provided a computational approach that was grounded in an understanding of human behaviour. Again, this is an advantage of the interdisciplinary approach afforded by Cognitive Science.

The Influence of Algorithms on our Society 

Perhaps one of the most salient and immediately present ways to see the influence of Cognitive Science is in the algorithms that drive the many products that we use online. Google is many things, but at its heart, it is a search algorithm and a way to organize the knowledge in the world so that the information that a user needs can be found. The basic ideas of knowledge representation that underlie Google’s categorization of knowledge were explored early on by Cognitive Scientists like Eleanor Rosch and John Anderson in the 1970s and 1980s. 

Or consider Facebook. The company runs and designs a sophisticated algorithm that learns about what you value and makes suggestions about what you want to see more of. Or, maybe more accurately, it makes suggestions for what the algorithm predicts will help you to expand your Facebook network… predictions for what will make you use Facebook more. 

In both of these cases, Google and Facebook, the algorithms are learning to connect the information that they acquire from the user, from you, with the existing knowledge in the system to make predictions that are useful and adaptive for the users, so that the users will provide more information to the system, so that it can refine its algorithm and acquire more information, and so on. As the network grows, it seeks to become more adaptive, more effective, and more knowledgeable. This is what your brain does, too. It causes you to engage in behaviour that seeks information to refine its ability to predict and adapt.

These networks and algorithms are societal minds; they serve the same role for society that our own network of neurons serves for our body. Indeed, these algorithms can even change society. This is something that some people fear.

Are Fears of the Future Well Founded?

When tech CEOs and politicians worry about the dangers of AI, I think that idea is at the core of their worry: the algorithms to which we entrust more and more of our decision making are altering our behaviour to serve the algorithm, in the same way that our brain alters our behaviour to serve our own minds and bodies. That strikes many people as unsettling and unstoppable. I think these fears are well founded and unavoidable, but like any new age or paradigm shift, we should continue to approach and understand this from scientific and humanist directions.

The Legacy of Cognitive Science

The breakthroughs of the 20th and 21st centuries arose from exploring learning algorithms in biology, instantiating those algorithms in increasingly powerful computers, and relating both of these to behaviour. The technological improvements in computing and neuroscience have enabled these ideas to become a dominant force in the modern world. Fear of a future dominated by non-human algorithms and intelligence may be unavoidable at times, but an understanding of Cognitive Science is crucial to being able to survive and adapt.

 

Cognitive Bias and the Gun Debate


Image: Getty Images

I teach a course at my Canadian university on the Psychology of Thinking and in this course, we discuss topics like concept formation, decision making, and reasoning. Many of these topics lend themselves naturally to the discussion of current topics and in one class last year, after a recent mass shooting in the US, I posed the following question:

“How many of you think that the US is a dangerous place to visit?”

About 80% of the students raised their hands. This was surprising to me because although I live and work in Canada and I’m a Canadian citizen, I grew up in the US; my family still lives there and I still think it’s a reasonably safe place to visit. Most students justified their answer by referring to school shootings, gun violence, and problems with American police. Importantly, none of these students had ever actually encountered violence in the US. They were thinking about it because it had been in the news. They were making a judgment about the likelihood of violence on the basis of the available evidence.

Cognitive Bias

The example above illustrates a cognitive bias known as the Availability Heuristic. The idea, originally proposed in the early 1970s by Daniel Kahneman and Amos Tversky (Kahneman & Tversky, 1979; Tversky & Kahneman, 1974), is that people generally make judgments and decisions on the basis of the most relevant memories that they retrieve, the memories that are available at the time the assessment or judgement is made. In other words, when you make a judgment about the likelihood of something occurring, you search your memory and base your decision on what you remember. Most of the time, this heuristic produces useful and correct judgments. But in other cases, the available evidence may not correspond to the actual evidence in the world. For example, we typically overestimate the likelihood of shark attacks, airline accidents, lottery wins, and gun violence.

Another cognitive bias (also from Kahneman and Tversky) is known as the Representativeness Heuristic. This is the general tendency to treat individuals as representative of their entire category. For example, if I formed a concept of American gun owners as being violent (based on what I’ve read or seen in the news), I might infer that each individual American gun owner is violent. I’d be making a generalization, or a stereotype, and this can lead to bias in how I treat people. As with availability, the representativeness heuristic arises out of the natural tendency of humans to generalize information. Most of the time, this heuristic produces useful and correct judgments. But in other cases, the representative evidence may not correspond to individual cases in the world.

The Gun Debate in the US

I’ve been thinking about this a great deal as the US engages in its ongoing debate about gun violence and gun control. It has been widely reported that the US has the highest rate of private gun ownership in the world, and also an extraordinary rate of gun violence relative to other countries. These are facts. Of course, we all know that “correlation does not equal causation”, but many strong correlations do derive from a causal link. The most reasonable thing to do would be to begin implementing legislation that restricts access to firearms, but this never seems to happen, even though many people are passionate about the need to restrict guns.

So why do we continue to argue about this? One problem that I rarely see discussed is that many of us have limited experience with guns and/or violence, so we have to rely on what we know from memory and from external sources, and we’re susceptible to cognitive biases.

Let’s look at things from the perspective of an average American gun owner. This might be you, people you know, family, etc. Most of these gun owners are very responsible, knowledgeable, and careful. They own firearms for sport and also for personal protection and, in some cases, even run successful training courses for people to learn about gun safety. From the perspective of a responsible and passionate gun owner, it seems quite true that the problem is not guns per se but the bad people who use them to kill others. After all, if you are safe with your guns and all your friends and family are safe, law-abiding gun owners too, then those examples will be the most available evidence for you to use in a decision. And so you base your judgements about gun violence on this available evidence and decide that gun owners are safe. As a consequence, gun violence is not a problem of guns and their owners, but must be a problem of criminals with bad intentions. Forming this generalization is an example of the availability heuristic. It may not be entirely wrong, but it is the result of a cognitive bias.

But many people (myself included) are not gun owners. I do not own a gun, but I feel safe at home. As violent crime rates decrease, the likelihood of being the victim of a personal crime that a gun could prevent is very small. Most people will never find themselves in that situation. In addition, my personal freedoms are not infringed by gun regulation, and I too recognize that illegal guns are a problem. If I generalize from my experience, I may have difficulty understanding why people would need a gun in the first place, whether for personal protection or for a vaguely defined “protection from tyranny”. From my perspective it’s far more sensible to focus on reducing the number of guns. After all, I don’t have one, I don’t believe I need one, so I generalize and assume that anyone who owns firearms might be suspect or irrationally fearful. Forming this generalization is also an example of the availability heuristic. It may not be entirely wrong, but it is the result of a cognitive bias.

In each case, we are relying on cognitive biases to infer things about others and about guns, and these inferences may be stifling the debate.

How do we overcome this?

It’s not easy to overcome a bias, because these cognitive heuristics are deeply engrained and indeed arise as a necessary function of how the mind operates. They are adaptive and useful. But occasionally we need to override a bias.

Here are some proposals, but each involves taking the perspective of someone on the other side of this debate.

  1. Those of us on the left of the debate (liberals, proponents of gun regulation) should try to recognize that nearly all gun enthusiasts are safe, law-abiding people who are responsible with their guns. Seen through their eyes, the problem lies with irresponsible gun owners. What’s more, the desire to place restrictions on their legally owned guns activates another cognitive bias known as the endowment effect, in which people place a high value on something they already possess; the prospect of losing it is aversive because it increases their feeling of uncertainty about the future.
  2. Those on the right (gun owners and enthusiasts) should consider the debate from the perspective of non-gun-owners and consider that proposals to regulate firearms are not attempts to seize or ban guns, but rather attempts to address one aspect of the problem: the sheer number of guns in the US, any of which could potentially be used for illegal purposes. We’re not trying to ban guns, but rather to regulate them and encourage greater responsibility in their use.

I think these things are important to deal with. The US really does have a problem with gun violence. It’s disproportionately high. Solutions to this problem must recognize the reality of the large number of guns, the perspectives of non-gun-owners, and the perspectives of gun owners. We’re only going to do this by first recognizing these cognitive biases and then attempting to overcome them in ways that search for common ground. By recognizing this, and maybe stepping back just a bit, we can begin to have a more productive conversation.

As always: comments are welcome.

The fluidity of thought

Knowing something about the basic functional architecture of the brain is helpful in understanding the organization of the mind and in understanding how we think and behave. But when we talk about the brain, it’s nearly impossible to do so without using conceptual metaphors (when we talk about most things, it’s impossible to do so without metaphors). 

Conceptual metaphor theory is a broad theory of language and thinking from the extraordinary linguist George Lakoff. One of the basic ideas is that we think about things and organize the world into concepts in ways that correspond to how we talk about them. It’s not just that language directs thought (that’s Whorf’s idea), but that these two things are linked and our language also provides a window into how we think about things. 

Probably the most common metaphor for the brain is the “brain is a computer” metaphor, but there are other, older ideas.

The hydraulic brain

One interesting metaphor for the brain and mind is the hydraulic metaphor. This goes back at least to Descartes (and probably earlier), who advocated a model of neural function whereby basic functions were governed by a series of tubes carrying “spirits” or vital fluids. In Descartes’s model, higher-order thinking was handled by a separate mind that was not quite in the body. You might laugh at the idea of brain tubes, but it seems quite reasonable as a theory from an era when bodily fluids were the most obvious indicators of health, sickness, and simply being alive: blood, discharge, urine, pus, bile, and other fluids are all indicators of things either working well or not working well. And when they stop, you stop. In Descartes’s time, these were the primary ways to understand the human body. So in the absence of other information about how thoughts and cognition occur, it makes sense that early philosophers and physiologists would make an initial guess that thoughts in the brain are also a function of fluids.

Metaphors for thinking

This idea, no longer endorsed, lives on in our language in the conceptual metaphors we use to talk about the brain and mind. We often talk about cognition and thinking as information “flowing”, in the same way that a fluid might flow. We have common expressions in English like the “stream of consciousness”, “waves of anxiety”, “deep thinking”, “shallow thinking”, ideas that “come to the surface”, and memories that come “flooding back” when you encounter an old friend. These all have their roots (“roots” is another conceptual metaphor, of a different kind!) in the older idea that thinking and brain function are controlled by the flow of fluids through the tubes in the brain.

In the modern era, it is still common to discuss neural activation as a “flow of information”. We might say that information “flows downstream”, or that there is a “cascade” of neural activity. Of course we don’t really mean that neural activation and cognition flow like water, but like so many metaphors it’s just about impossible to describe things without using these expressions, and in doing so we activate the common conceptual metaphor that thinking is a fluid process.

There are other metaphors as well (like the electricity metaphor: behaviours being “hard wired”, getting “wires crossed”, an idea that “lights up”), but I think the hydraulic metaphor is my favourite because it captures the idea that cognition is fluid. We can dip our toes in the stream or hold back floods. And as you can see from earlier posts, I have something of a soft spot for river metaphors.

 

 

A Curated Reading List

Fact: I do not read enough of the literature any more. I don’t really read anything. I read manuscripts that I am reviewing, but that’s not really sufficient to stay abreast of the field. I assign readings for classes, grad students, and trainees, and we may discuss current trends. This is great for the lab, but for me the effect is something like saying to my lab, “read this and tell me what happened”. And I read Twitter.

But I always have a list of things I want to read. What better way to work through these papers than to blog about them, right?

So this is the first instalment of “Paul’s Curated Reading List”. I’m going to focus on cognitive science approaches to categorization and classification behaviour. That is my primary field, and the one I most want to stay abreast of. In each instalment, I’ll pick a paper that was published in the last few months, a preprint, or a classic. I’ll read it, summarize it, and critique it. I’m not looking to go after anyone or promote anyone. I just want to stay up to date. I’ll post a new instalment on a regular basis (once every other week, once a month, etc.). I’m doing this for me.

So without further introduction, here is Reading List Item #1…

Smith, J. D., Jamani, S., Boomer, J., & Church, B. A. (2018). One-back reinforcement dissociates implicit-procedural and explicit-declarative category learning. Memory & Cognition, 46(2), 261–273.

Background

This paper was published online last fall but officially appeared in February 2018. I came across it this morning while looking at the “Table of Contents” email from Memory & Cognition. Full disclosure: the first author was my grad advisor from 1995-2000, though we haven’t collaborated since then (save for a chapter). He’s now at Georgia State and has done a lot of fascinating work on metacognition in non-human primates.

The article describes a single study on classification/category learning. The authors are working within a multiple-systems approach to category learning. According to this framework, a verbally mediated, explicit system learns categories by trying to abstract and use a rule, and a procedurally mediated, implicit system learns categories by stimulus-response (S-R) association. Both systems have well-specified neural underpinnings. The two systems work together, but sometimes they are in competition. I know this theory well and have published quite a few papers on the topic. So of course I wanted to read this one.

A common paradigm in this field is to introduce a manipulation that is predicted to impair or enhance one of the systems and leave the other unharmed, in order to create a behavioural dissociation. The manipulation in this paper was 1-back feedback. In one condition, participants received feedback right after their decision; in another, they received feedback about their decision one trial later. Smith et al. reasoned that the feedback delay would disrupt the S-R learning mechanism of the procedural/implicit system, because it would interfere with the temporal contiguity between stimulus and response. It should have less of an effect on the explicit system, since learners can use working memory to verbalize the rule they used and the response they made.

Methods

In the experiment, Smith et al. taught people to classify a large set (480) of visual stimuli, varying along two perceptual dimensions, into two categories. You get 480 trials, and on each trial you see a shape, make a decision, get feedback, see another shape, and so on. The stimuli themselves are rectangles that vary in size (dimension 1) and pixel density (dimension 2). The figure below shows examples of the range. There was no fixed set of exemplars; rather, “each participant received his or her own sample of randomly selected category exemplars appropriate to the assigned task”.

[Figure: example rectangle stimuli varying in size and pixel density]

They used a 2 × 2 design with two between-subject factors. The first factor was category set. Participants learned either a rule-based (RB) category set, in which a single dimension (size or density) creates an easily verbalized rule, or an information-integration (II) category set, in which both dimensions need to be integrated at a pre-decisional stage. The II categories can’t be learned very easily by a verbal rule, and many studies have suggested they are learned by the procedural system. The figure below shows how the hundreds of individual exemplars would be divided into two categories for each of the category sets (RB and II).

[Figure: RB and II category structures, showing how exemplars divide into two categories in the size-density space]
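For readers who find code clearer than prose, here is a rough sketch of how RB and II category structures of this general kind can be generated. The dimension ranges and boundaries are invented for illustration; they are not the values used by Smith et al. (2018).

```python
import numpy as np

rng = np.random.default_rng(1)

def rb_exemplar():
    """Rule-based: a single dimension (size) separates the two categories."""
    size, density = rng.uniform(0, 100, 2)
    label = "A" if size < 50 else "B"        # an easily verbalized rule
    return size, density, label

def ii_exemplar():
    """Information-integration: both dimensions must be combined (diagonal bound)."""
    size, density = rng.uniform(0, 100, 2)
    label = "A" if size < density else "B"   # no simple one-dimensional rule
    return size, density, label

# Each simulated participant gets their own random sample of 480 exemplars.
rb_stimuli = [rb_exemplar() for _ in range(480)]
ii_stimuli = [ii_exemplar() for _ in range(480)]
print(rb_stimuli[0], ii_stimuli[0])
```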

The second factor was feedback. After each decision, participants either received feedback immediately (0-Back) or received it one trial later (1-Back). The delayed feedback creates a heavier working memory demand, so it should make the RB categories harder to learn at first. But it should interfere with II learning by the procedural system, because the 1-Back delay disturbs the S-R association.

Results

So what did they find? The learning data are plotted below and suggest that the 1-Back feedback made it harder to learn the RB categories at first, and seemed to hurt the II categories at the end. The three-way ANOVA (Category × Feedback × Block) provided evidence to that effect, but it’s not an overwhelming effect. Smith et al.’s decision to focus a follow-up analysis on the final block was not very convincing. Essentially, they compared means and 95% CIs for the final block for each of the four cells and found that performance in the two RB conditions did not differ, but performance in the two II conditions did. Does that mean that the delayed feedback was disrupting II learning? I’m not sure. Maybe participants in that condition (II-1Back) were just getting weary of a very demanding task. A visual inspection of the data seems consistent with that alternative conclusion as well. Exploring the linear trends might have been a stronger approach.

[Figure: learning curves (proportion correct by block) for the four conditions]

The second analysis was a bit more convincing. They fit each subject’s data with a rule model and an II model. Each model tried to account for each subject’s final 100 trials. This is pretty easy to do and you are just looking to see which model provides the most likely account of the data. You can then plot the best fitting model. For subjects who learned the RB category, the optimal rule should be the vertical partition and for the II category, the optimal model is the diagonal partition.
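To make the logic of that model comparison concrete, here is a highly simplified sketch: fit a one-dimensional boundary (the verbalizable rule) and a diagonal boundary (integration of both dimensions) to a simulated participant’s final trials, and see which predicts the responses better. The real analyses use maximum-likelihood decision-bound models with noise parameters; this grid search over boundaries, and the toy participant, are assumptions for illustration only.

```python
import numpy as np

def fit_rule_model(size, responses):
    """Best single criterion on the size dimension (a verbalizable rule)."""
    criteria = np.linspace(0, 100, 101)
    return max(np.mean((size > c) == responses) for c in criteria)

def fit_ii_model(size, density, responses):
    """Best diagonal boundary of the form size - density > c (integration)."""
    criteria = np.linspace(-100, 100, 201)
    return max(np.mean((size - density > c) == responses) for c in criteria)

# A toy "participant" whose final 100 responses follow a noisy diagonal bound.
rng = np.random.default_rng(2)
size, density = rng.uniform(0, 100, (2, 100))
responses = (size - density > 0) ^ (rng.random(100) < 0.1)   # 10% response noise

print("rule model fit:", fit_rule_model(size, responses))
print("II model fit:  ", fit_ii_model(size, density, responses))
```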

As seen in the figure below, the feedback manipulation did not change the strategy very much for subjects who learned the RB categories. Panels (a) and (b) show that the best-fitting model was usually a rule-based one (the vertical partition). The story is different for subjects learning the II categories. First, there is far more variation in the best-fitting model. Second, very few subjects in the 1-Back condition (d) show evidence of using the optimal boundary (the diagonal partition).

[Figure: best-fitting decision bounds for individual participants in each of the four conditions]

Conclusions

Smith et al. concluded: “We predicted that 1-Back reinforcement would disable associative, reinforcement-driven learning and the II category-learning processes that depend on it. This disabling seems to have been complete.” But that’s a strong conclusion. Too strong. Based on the modelling, the more measured conclusion seems to be that about 7-8 of the 30 subjects in the II-0Back condition learned the optimal boundary (the diagonal), compared to about 1 subject in the II-1Back condition. Maybe just a handful of keeners ended up in the II-0Back condition and learned the complex structure? It’s not easy to say. There is some evidence in favour of Smith et al.’s conclusion, but it’s not at all clear.

I still enjoyed reading the paper. The task design is clever, and the predictions flow logically from the theory (which is very important). It’s incremental work: it adds to the literature on the multiple-systems theory but does not (in my opinion) rule out a single-system approach. But I wish they had done a second study as an internal replication to explore the stability of the result, or maybe a second study with the same category structure but different stimuli.

Tune in in a few weeks for the next instalment. Follow my blog if you like and check the tag to see the full list. As the list grows, I may create a better structure for these, too.

 

The one “productivity hack” that you probably avoid like the plague

A few weeks ago, my office phone rang. It rarely does, and even when it does ring, I rarely answer it. I usually let the call go to voicemail. My outgoing voicemail actually says: “I never check voicemail, please email me at…“. And even if someone does leave a voicemail, it’s transcribed by the university email systems and sent to my email.

This is so inefficient, and self-sabotaging, given that I, like most academics, moan about how much email I have to process.

But this time, noticing that it was another professor in my department who was calling, I picked it up. My colleague who was calling is the co-ordinator for the Psychology Honours program and he had a simple question about the project that one of my undergraduate Honours students was working on. We solved the issue in about 45 seconds.

If I had followed my standard protocol, he would have left a voicemail or emailed me (or both). It would have probably taken me a day to respond, and the email would have taken 5-8 minutes for me to write. He’d have then replied (a day later), and if my email was not clear, there might have been another email. Picking up the phone saved each of us time and effort and allowed my student’s proposed project to proceed.

Phone Aversion

Why are we so averse to using the phone?

I understand why in principle: it’s intrusive, it takes you out of what you were doing (answering email, probably), and you have to switch tasks. The act of having to disengage from what you are doing and manage a call is a cognitively demanding one. After the call, you then have to switch back. So it’s natural to make a prospective judgement to avoid taking the call.

And from the perspective of the caller, you might call and not get an answer, and then you have to engage a new decision-making process: should I leave a message, call again, or just email? This cognitive switching takes time and effort. And of course, as many of us resent being interrupted by a call, we may also assume that the person we are calling resents the interruption, and so we avoid calling out of politeness (maybe this is more of a Canadian thing…).

So there are legitimate, cognitive and social/cognitive reasons to avoid using the phone.

We Should Make and Take More Calls

My experience was a small revelation, though. Mostly because after the call, while I was switching back to what I had been doing, I thought about how much longer (days) the standard email approach would have taken. So I decided that, going forward, I’m going to try to make and take more calls. It can be a personal experiment.

I tried this approach a few days ago with some non-university contacts (for the youth sports league I help to manage). We saved time and effort. Yes, it might have taken a few minutes out of each other’s day, but it paled in comparison to what an email-based approach would have taken.

For Further Study

Although I’m running a “personal experiment” on phone call efficiency, I’d kind of like to study this in more detail. Perhaps design a series of experiments in which two (or more) people are given a complex problem to solve and we can manipulate how much time they can spend on email vs time on the phone. We’d track things like cognitive interference. I’m not exactly sure how to do this, but I’d like to look at it more systematically. The key things would be how effectively people solve the problems, and if and how one mode of communication interferes with other tasks.

Final Thoughts

Do you prefer email or a phone call? Have you ever solved a problem faster on the phone vs email? Have you ever found the reverse to be true?

Or do you prefer messaging (Slack, Google Chat, etc.) which is more dynamic than email but not as intrusive as a phone call?

 

A Computer Science Approach to Linguistic Archeology and Forensic Science

Last week (Sept 2014), I heard a story on NPR’s Morning Edition that really got me thinking… (side note: I’m in Ontario so there is no NPR, but my favourite station is WKSU, via TuneIn radio on my smartphone). It was a short story, but I thought it was one of the most interesting I’ve heard in the last few months, and it got me thinking about how computer science has been used to understand natural language cognition.

Linguistic Archeology

Here is a link to the actual story (with transcript). MIT computer scientist Boris Katz realized that when people learn English as a second language, they make certain errors that are a function of their native language (e.g. native Russian speakers leave out articles in English). This is not a novel finding; people have known this for a while. Katz, by the way, is one of many scientists who worked with Watson, the IBM computer that competed on Jeopardy!.

Katz trained a computer model on samples of English text such that it could detect the writer’s native language based on errors in their written English. But the model also learned to determine similarities among the native languages themselves. The model discovered, based on errors in English, that Polish and Russian have historical overlap. In short, the model was able to recover the well-known linguistic family tree among many natural languages.

The next step is to use the model to uncover new things about dying or disappearing languages. As Katz says:

“But if those dying languages have left traces in the brains of some of those speakers and those traces show up in the mistakes those speakers make when they’re speaking and writing in English, we can use the errors to learn something about those disappearing languages.”
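To give a flavour of the general approach (and only a flavour: Katz’s model is far more sophisticated and trained on real learner corpora), here is a toy classifier that guesses a writer’s native language from their English. The example sentences and language labels are fabricated placeholders, not real data.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Fabricated training examples: English sentences with transfer-style errors.
texts = [
    "I went to store and bought bread",        # dropped articles
    "I am here since three years",             # tense transfer
    "I have seen him yesterday",
]
native_language = ["Russian", "French", "German"]   # invented labels

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # word and word-pair features
    MultinomialNB(),                       # simple probabilistic classifier
)
model.fit(texts, native_language)
print(model.predict(["Yesterday I have eaten in restaurant"]))
```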

Computational Linguistic Forensics

This is only one example. Another one that fascinated me was the work of Ian Lancashire, an English professor at the University of Toronto, and Graeme Hirst, a professor in the computer science department. They noticed that the output of Agatha Christie—she wrote around 80 novels, and many short stories—declined in quality in her later years. That itself is not surprising, but they thought there was a pattern. After digitizing her work, they analyzed the technical quality of her output and found that the richness of her vocabulary fell by one-fifth between the earliest two works and the final two works. That, and other patterns, are more consistent with Alzheimer’s disease than with normal aging. In short, they tentatively diagnosed Christie with Alzheimer’s disease based on her written work. You can read a summary HERE and you can read the actual paper HERE. It’s really cool work.
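As a back-of-the-envelope illustration of the kind of measure involved, here is a sketch of vocabulary richness computed as a simple type-token ratio. Lancashire and Hirst’s actual analysis was more careful (fixed-size samples, several lexical measures); the “novels” below are placeholder strings, not real text.

```python
def type_token_ratio(text: str, sample_size: int = 50_000) -> float:
    """Distinct words divided by total words, over a fixed-size sample."""
    words = text.lower().split()[:sample_size]   # compare equal-sized samples
    return len(set(words)) / len(words)

early_novel = "open the door said the detective and tell me what you saw"   # placeholder
late_novel = "the the thing said said said the door thing thing you saw"    # placeholder

print("early:", round(type_token_ratio(early_novel), 3))
print("late: ", round(type_token_ratio(late_novel), 3))
```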

Text Analysis at Large

I think this work is really fascinating and exciting. It highlights just how much can be understood via text analysis. Some of this is already commonplace: we educators rely on software to detect plagiarism, and Facebook and Google are using these tools as well. One assumes that the NSA might rely on many of these same ideas to infer and predict information and characteristics about the author of a set of written statements. And if a computer can detect a person’s linguistic origin from English textual errors, I’d imagine it could be trained to mimic the same effects and produce English that looks like it was written by a native speaker of another language…but was not. That’s slightly unnerving…

Music and the Mind

As I am sitting down to write this blog entry, my younger daughter is practicing her piano lessons for the week. She will put in twenty minutes of practice, paying extra attention to counting (her teacher really likes her students to count). In the short term, she will progress to being able to play more complicated pieces, to play music (rather than just notes) and our living room will be filled with the sounds of elementary piano music.

Hearing our children play music is an undeniably wonderful thing.

But in the long term, there is increasing evidence that the time she spends on music instruction may have long lasting and beneficial effects on cognitive function, social behavior, and academic performance. That seems to be the conclusion of much of the contemporary research on the effects of music on the brain and mind.

Full disclosure, although I study cognition and thinking, this is not my area of expertise. I’m interested as a psychologist, but also as a parent and music lover. So I’m not endorsing anything in my professional capacity, I just find this work really fascinating.

The study of music and the mind had a dubious moment of fame in the 1990s, and everyone has heard of the “Mozart effect”. The idea, which was wildly overinterpreted by many, was that listening to music (specifically the music of Mozart) will “make you smarter”. Of course, the original paper did not make this claim, and the authors were clear that these were short-term effects of listening to a piece of music on subsequent performance on spatial reasoning tasks. But the public was so enamored of this finding that a whole industry was spawned (“Baby Einstein” DVDs), and the governor of the state of Georgia actually set aside money to make sure that every baby born in that state was given a classical music CD.

Although the idea that passive listening to classical music would make babies and kids more intelligent and more creative is erroneous, interest in music and the mind has not disappeared, and a few weeks ago, I came across several popular science articles that suggest a renewed interest in the topic. And this time, the claims are more credible and the possible benefits much more long lasting.

But what effects does music (either listening to it or playing it) have on the mind?

There is robust evidence from Glenn Schellenberg’s lab at the University of Toronto that music instruction is directly linked to higher IQ scores. A paper from 2005 summarized this work and found that music instruction was correlated with improvements on spatial, mathematical, and verbal tasks. He writes, “Does music make you smarter? The answer is a qualified yes.” The reasoning is that music instruction seems to have these effects because it is school-like, requires attention, is enjoyable, and engages many areas of the brain. Learning about music also requires and encourages abstract thought. The suggestion here is that a person can identify the same tune even if it is played in a different tempo, on a different instrument, or in a different key, because they have processed it as an abstraction. The “qualified yes” is that it is not clear whether music lessons are the only way to get this improvement, and Schellenberg suggests that other kinds of private lessons (drama, for example) might show similar cognitive benefits.

But other research has begun to track the academic performance and brain function of students who engage in music instruction. A longitudinal study being run by Nina Kraus at Northwestern University is looking carefully at long term benefits of school-based music curricula (as opposed to private lessons as in the Schellenberg study). In essence, music instruction in school seems to improve children’s communication skills, attention, and memory. Kraus’s team is also examining the neural correlates to these benefits and even finds that the auditory processing advantages and neural changes that come from music instruction are robust into adulthood. In other words, if there are cognitive and perceptual enhancements from studying music as a child, these changes may persist long after music instruction is over.

Finally, a recent editorial in the New York Times asked “Is Music the Key to Success?” The author notes that many very successful people benefited from extensive music training. Alan Greenspan, Steven Spielberg, Larry Page, Paul Allen, Condoleezza Rice, and others were (and are) trained musicians. This is not to say that piano lessons at age 6 = future Secretary of State, and of course the Op-Ed asks “Will your school music program turn your kid into a Paul Allen, the billionaire co-founder of Microsoft (guitar)? Or a Woody Allen (clarinet)? Probably not.” But the correlations are there, and the evidence (including the more rigorous studies above) is compelling.

The message is: Learn to play music,  or have your children learn an instrument.

Obviously, there is no evidence that instruction in music produces negative effects. None. So why do schools and school boards sometimes look to cutting music and arts programs as a way to make ends meet? Just this year, the Toronto school board decided (controversially) to make some severe cuts to its music programs, and this problem is province-wide (though thankfully it has not reached our kids’ public elementary school…we have a great music program). And this problem is not unique to Ontario, of course. California has seen its school music programs decimated.

This is not a good idea.

My point is, there is ample evidence—even when viewed with a skeptical eye—that music instruction has tangible benefits and there is literally no downside. If anything, I’d argue for more music instruction in schools. We’ll likely see wide-ranging cognitive and academic benefits as a result. But if nothing else, we’ll maybe create more musicians.

Gladwell versus the academy (a modern David and Goliath)

I’ll start with an admission: I have never read any of Malcolm Gladwell’s books.

It’s nothing personal or principled, but I just never got around to it; I tend to prefer reading fiction in my spare time anyway. I have enjoyed some of his essays in the New Yorker, but that’s about it. So I am not writing about the content of his books. I’m writing about the reception that his books receive, the criticisms, and the apparent belief by many that he’s a scientist. This, it seems, really bothers some actual scientists.

Malcolm Gladwell is an enormously successful and gifted writer. No one can argue with this. His books Blink, The Tipping Point, and Outliers have made accessible to many people outside the academic and scientific world some of the most interesting and exciting ideas in cognition, social psychology, and neuroscience. He has had a long career as a journalist, is well read, and he’s no Jonah Lehrer….

With each book, Gladwell’s stature has grown, but I have noticed that the reaction from academics has been less than enthusiastic. Many feel that he misunderstands (or worse, misrepresents) the scientific studies upon which many of his books are built. Dan Simons and Chris Chabris are two of the more vocal critics, and they are both well-respected and well-known scientific psychologists. They argued (in an article posted in the Chronicle of Higher Education) that many people were overly enthusiastic about the premise of Blink, namely that intuition can produce better outcomes than analytic cognition. It’s not that they necessarily thought the book was wrong so much as they felt everyone was misinterpreting what it was about. In fact, Simons and Chabris are the authors of The Invisible Gorilla: How Our Intuitions Deceive Us, which argues that human intuitions can be very deceptive. The title, by the way, refers to one of Simons’s most well-known experiments.

They are not the only vocal critics. Steven Pinker is probably closer to Malcolm Gladwell in terms of being a public intellectual (and he has received his fair share of criticism as well). And he too is critical of Gladwell’s books for some of the same reasons. In a review of Outliers,  Pinker writes that “The reasoning in “Outliers,” which consists of cherry-picked anecdotes, post-hoc sophistry and false dichotomies, had me gnawing on my Kindle.”

So now Malcolm Gladwell has a new book, David and Goliath. As I mentioned before, I have not read this book, so I make no attempt to provide my own critique. But one anecdote in particular seems to have garnered a lot of attention. Gladwell discusses several stories of people who became very successful despite having dyslexia. His thesis seems to be that having dyslexia made it just a little harder for these people to get by, and so maybe they worked a little harder, compensated for the dyslexia, and thus achieved greatness. Gladwell calls this “the theory of desirable difficulty.” He bases this (apparently) on a study from 2007 in which subjects who read a mathematical reasoning problem in a hard-to-read typeface actually outperformed subjects who read the same problems in an easier-to-read typeface. So there may be a connection, but there may not be.

In a recent review in the WSJ, Christopher Chabris takes Gladwell to task. He points out that the 2007 study in question has not replicated that well. He wonders why Gladwell does not point this out. He wonders why Gladwell asserts as “laws” phenomena that have many possible interpretations. The review is critical, and very good, and points out what I really think people should be aware of when they read Gladwell’s book, namely that it contains interesting anecdotes mixed with science, and that the writing is very good and persuasive. This need not be a bad thing, and Gladwell and his supportive critics point out that this is a great narrative form and is exactly what makes Gladwell so good. Stories matter. Narrative matters. But the expanded version on Chabris’s blog went further: Chabris worries that Gladwell knows full well that people overinterpret his books and simply does not care. He writes, “I can certainly think of one gifted writer with a huge audience who doesn’t seem to care that much. I think the effect is the propagation of a lot of wrong beliefs among a vast audience of influential people. And that’s unfortunate.”

Ouch.

Is this envy? I do not think so. Dan Simons and Chabris are successful authors in their own right. So is Steven Pinker. But the difference is that they are also successful academics and researchers. Chabris makes the point that many people simply consider Gladwell to be an authority, rather than an author. The term “Gladwellian” exists.

The review was critical enough to cause Mr. Gladwell to respond on Slate.com. Gladwell suggested that “Chabris should calm down”, and he even took a mild swipe at Mr. Chabris’s wife. Why so personal? I will confess that I did not find Gladwell’s Slate response very flattering. It came across as arrogant and dismissive. Does Gladwell imagine himself as the David and the academy as the Goliath? Possibly, though I’m inclined to think the opposite. Gladwell’s “brand” is so big that he is very likely the Goliath in this fight. And (in keeping with the thesis of his new book) his gifts, his incredible writing talent, may very well be what could bring him down.

In the end, I’m glad that this debate is even able to happen. I’m glad that there is a journalist and writer like Malcolm Gladwell who is interested and excited enough by human behavior and psychology to write best sellers. I’m glad that there are serious and respected scientists like Chabris and Simons to call him out when the claims go too far.

In the course of following these criticisms and counter-criticisms, I’ve become much more interested in reading this work. I fully plan to read Gladwell’s book of essays (What The Dog Saw) and some of his other books. As well, I’m planning to read Simons and Chabris’s book too. All concerned parties can rest assured that I’ll be checking them out of my public library soon, and that no actual cash will flow.