Category Archives: cognitive science

The fluidity of thought

Knowing something about the basic functional architecture of the brain is helpful in understanding the organization of the mind and in understanding how we think and behave. But when we talk about the brain, it’s nearly impossible to do so without using conceptual metaphors (when we talk about most things, it’s impossible to do so without metaphors). 

Conceptual metaphor theory is a broad theory of language and thinking from the extraordinary linguist George Lakoff. One of the basic ideas is that we think about things and organize the world into concepts in ways that correspond to how we talk about them. It’s not just that language directs thought (that’s Whorf’s idea), but that these two things are linked and our language also provides a window into how we think about things. 

Probably the most common metaphor for the brain is the “brain is a computer” metaphor, but there are other, older ideas.

The hydraulic brain

One interesting metaphor for the brain and mind is the hydraulic metaphor. This goes back at least to Descartes (and probably earlier), who advocated a model of neural function whereby basic functions were governed by a series of tubes carrying “spirits” or vital fluids. In Descartes’ model, higher-order thinking was handled by a separate mind that was not quite in the body. You might laugh at the idea of brain tubes, but it seems quite reasonable as a theory from an era when bodily fluids were the most obvious indicators of health, sickness, and simply being alive: blood, discharge, urine, pus, bile, and other fluids are all indicators of things either working well or not working well. And when they stop, you stop. In Descartes’ time, these were the primary ways to understand the human body. So in the absence of other information about how thoughts and cognition occur, it makes sense that early philosophers and physiologists would make an initial guess that thoughts in the brain are also a function of fluids.

Metaphors for thinking

This idea, no longer endorsed, lives on in our language in the conceptual metaphors we use to talk about the brain and mind. We often talk about cognition and thinking as information “flowing” as in the same way that fluid might flow. We have common expressions in English like the “stream of consciousness” or “waves of anxiety”, “deep thinking”, “shallow thinking”, ideas that “come to the surface”, and memories that come “flooding back” when you encounter an old friend. These all have their roots (“roots” is another conceptual metaphor of a different kind!) in the older idea that thinking and brain function are controlled by the flow of fluids through the tubes in the brain.

In the modern era, it is still common to discuss neural activation as a “flow of information”. We might say that information “flows downstream”, or that there is a “cascade” of neural activity. Of course we don’t really mean that neural activation and cognition are flowing like water, but like so many metaphors, it’s just impossible to describe things without using these expressions, and in doing so we activate the common conceptual metaphor that thinking is a fluid process.

There are other metaphors as well (like the electricity metaphor, behaviours being “hard wired”, getting “wires crossed”, an idea that “lights up”) but I think the hydraulic metaphor is my favourite because it captures the idea that cognition is fluid. We can dip our toes in the stream or hold back floods. And as you can see from earlier posts, I have something of a soft spot for river metaphors.


Cognitive Psychology and the Smartphone


A pile of smartphones. Image from Finder

The iPhone was released 10 years ago and that got me thinking about the relationships I’ve had with smartphones and mobile devices. Of course, I remember almost all of them…almost as if they were real relationships. The first one, the Qualcomm QPC 860, was solid but simple. That was followed by a few forgettable flip phones and a Motorola “ROKR” phone that never really lived up to its promise.

But then came the iPhone, and everything changed. I started really loving my phone. I had an iPhone 3GS (sleek and black) and a white iPhone 4S, which I regard as the pinnacle of iPhone design and still have as a backup phone. A move to Android saw a brief run with an HTC, and I was in a steady commitment with my dependable and conservative Moto X Play for over two years before upgrading to a beautiful OnePlus 5T. It’s with me every single day, and almost all the time. Is that too much? Probably.

Smartphones are used for many things

There is a very good chance that you are reading this on a smartphone. Most of us have one, and we probably use it for many different tasks.

  • Communication (text, email, chat)
  • Social Media (Facebook, Twitter)
  • Taking and sharing photos
  • Music
  • Navigation
  • News and weather
  • Alarm clock

One thing that all of these tasks have in common is that the smartphone has replaced other ways of accomplishing them. That was the original idea for the iPhone: one device to do many things. Not unlike “the One Ring”, the smartphone has become the one device to rule them all. Does it rule us also?

The Psychological Cost of Having a Phone

For many people, the device is always with them. Just look around a public area: it’s full of people on their phones. As such, the smartphone starts to become part of who we are. This ubiquity could have psychological consequences. And there have been several studies looking at the costs. Here are two that piqued my interest.

A few years ago, Cary Stothart did a cool study in which research participants were asked to engage in an attention monitoring task (the SART). They did the task twice; on the second session, 1/3 of the participants received random text notifications while they did the task, 1/3 received a random call to their phone, and 1/3 proceeded as they had in the first session, with no additional interference. Participants in the control condition performed at the same level on the second session, but participants who received random notifications (text or call) made significantly more errors on the task during the second session. In other words, there was a real cost to getting a notification. Each buzz distracted the person just a bit, but enough to reduce performance.

So put your phone on “silent”? Maybe not…

A paper recently published by Adrian Ward and colleagues (Ward, Duke, Gneezy, & Bos, 2017) seems to suggest that just having your phone near you can interfere with some cognitive processing. In their study, they asked 448 undergraduate volunteers to come into the lab and participate in a series of psychological tests. Participants were randomly assigned to one of three conditions: desk, pocket/bag, or other room. People in the other room condition left all of their belongings in the lobby before entering the testing room. People in the desk condition left most of their belongings in the lobby but took their phones into the testing room and were instructed to place their phones face down on the desk. Participants in the pocket/bag condition carried all of their belongings into the testing room with them and kept their phones wherever they naturally would (usually pocket or bag). Phones were kept on silent.

The participants in all three groups then engaged in a test of working memory and executive function called the “operation span” task, in which participants had to solve basic math problems while keeping track of letters (you can run the task yourself here), as well as the Raven’s progressive matrices task, which is a test of fluid intelligence. The results were striking: in both cases, having the phone near you significantly reduced your performance on these tasks.
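The structure of an operation-span trial is easy to see in code. Here is a toy trial generator, a minimal sketch rather than the actual task software used in the study; the letter set and equation format are my own assumptions for illustration.

```python
import random

def make_trial(n_items, seed=None):
    """Build one operation-span trial: the participant verifies each simple
    equation while holding a growing sequence of letters in memory, then
    recalls the letters in order at the end."""
    rng = random.Random(seed)
    # Distinct consonants so the memory items can't be chunked into words.
    letters = rng.sample("BCDFGHJKLNPQRSTVWXZ", n_items)
    problems = []
    for _ in range(n_items):
        a, b = rng.randint(1, 9), rng.randint(1, 9)
        is_true = rng.random() < 0.5          # half the equations are wrong
        shown = a + b if is_true else a + b + rng.randint(1, 3)
        problems.append((f"{a} + {b} = {shown}?", is_true))
    return letters, problems
```

A participant’s span is then the longest list length at which they can both verify the equations correctly and recall the letters in order.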

A second study found that people who were more dependent on their phones were affected more by their presence. This is not good news for someone like me, who seems to always have his phone nearby. They write:

Those who depend most on their devices suffer the most from their salience, and benefit the most from their absence.

One of my graduate students is currently running a replication of this work, and (assuming it replicates) we’ll look in greater detail at just how and why the smartphone is having this effect.

Are Smartphones a Smart Idea?

Despite the many uses for these devices, I wonder how helpful they really are…for me at least. When I am writing or working, I often turn the wifi off (or use Freedom) to reduce digital distractions. But I still have my phone sitting right on the desk and I catch myself looking at it. There is a cost to that. I tell students to put their phones on silent and in their bags during an exam. There is a cost to that. I tell students to put them on the desk on silent mode during lecture. There is a cost to that. When driving, I might have the phone in view because I use it to play music and navigate with Google Maps. I use Android Auto to maximize the display and mute notifications and distractions. There is a cost to that.

It’s a love-hate relationship. One of the reasons I still have my iPhone 4S is that it’s slow and has no email or social media apps. I’ll bring it with me on a camping trip or hike so that I have weather, maps, phone, and text, but nothing else: it’s less distracting. Though it seems weird to have to own a second phone to keep me from being distracted by my real one.

Many of us spend hundreds of dollars on a smartphone and more every month for a data plan, and at the same time have to develop strategies to avoid using the device. It’s a strange paradox of modern life that we pay to use something that we have to work hard to avoid using.

What do you think? Do you find yourself looking at your phone and being distracted? Do you have the same love/hate relationship? Let me know in the comments.

References

Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity. Journal of the Association for Consumer Research. https://doi.org/10.1086/691462

Stothart, C., Mitchum, A., & Yehnert, C. (2015). The attentional cost of receiving a cell phone notification. Journal of Experimental Psychology: Human Perception and Performance 41(4), 893–897. http://doi.org/10.1037/xhp0000100


Cognitive Bias and Guns in America

UPDATE Oct 3, 2017: I wrote this almost exactly two years ago, following an earlier mass shooting. I think it still applies and helps to explain why we have trouble even talking about this. 

I posed the following question this week (Oct. 2015) to the students in my 3rd-year Psychology of Thinking class.

“How many of you think that the US is a dangerous place to visit?”

About 70% of the students raised their hands. This was surprising to me because although I live and work in Canada, and I’m a Canadian citizen, I grew up in the US; my family still lives there and I think it’s a very safe place to visit. Most students justified their answer by referring to school shootings, gun violence, and problems with American police. So although none of these students had actually encountered violence in the US, they were thinking about it because it has been in the news.

Cognitive Bias

This is a clear example of a cognitive bias known as the Availability Heuristic. The idea, originally proposed in the early 1970s by Daniel Kahneman and Amos Tversky, is that people generally make judgments and decisions on the basis of relevant memories that they retrieve and that are thus available at the time the assessment or judgment is made. In other words, when you judge the likelihood of an event, you search your memory and base your decision on the available evidence. Most of the time, this heuristic produces useful and correct judgments. But in other cases, the available evidence may not correspond to the actual state of the world. We typically overestimate the likelihood of shark attacks and random violence because these low-probability events are highly salient and available in memory.

Another cognitive bias (also from Kahneman and Tversky) is known as the Representativeness Heuristic. This is the general tendency to treat individuals as representative of an entire category. For example, if I have a concept of American gun owners as being violent (based on what I’ve read or seen in the news), I might infer that each individual American is a violent gun owner. I’d be making a generalization, or a stereotype, and this can lead to bias. As with Availability, the Representativeness Heuristic arises out of the natural human tendency to generalize information. Most of the time, this heuristic produces useful and correct judgments. But in other cases, the representative evidence may not correspond to the individual cases in the world.

The Gun Debate in the US

I’ve been thinking about this a great deal as the US engages in yet another debate about gun violence and gun control. It’s been reported widely that the US has the highest rate of private gun ownership in the world, and also has an extraordinary rate of gun violence relative to other countries. These are facts. Of course, we all know that “correlation does not equal causation”, but many strong correlations often derive from a causal link.

So why do we continue to argue about this? One problem that I rarely see discussed is that many of us have limited experience with guns and/or violence, and so we have to rely on what we know from memory and from external sources; we’re susceptible to cognitive biases as a result.

For example, I’m not a gun owner any more, but many of my friends and family are, and these people are some of the most responsible gun owners I know. They own firearms for sport and also for personal protection and in some cases, even run successful training courses for people to learn about gun safety. From the perspective of a responsible and passionate gun owner, it’s quite true that the problem is not guns but the bad people who use them to kill. After all, if you are safe with guns and all your friends and family are too, then you base your judgements on the available evidence: gun owners are safe and so gun violence is not a problem of guns and their owners, but a problem of criminals with bad intentions.

But what about non-gun owners? Although I do not own a gun, I feel safe at home. My personal freedoms are not infringed, and I recognize that illegal guns are the problem. And so I generalize this experience, and I may have difficulty understanding why people would need a gun in the first place, whether for personal protection or for a vaguely defined “protection from tyranny”. From this perspective it’s far more sensible to focus on reducing the number of guns. After all, we don’t have one and we don’t believe we need one, so we generalize and assume that anyone who owns firearms might be suspect or irrationally fearful.

In each case, we are relying on cognitive heuristics to infer things about others and about guns. These inferences may be stifling the debate and interfering with our ability to find a solution.

How do we overcome this?

It’s not easy to overcome a bias, because these cognitive heuristics are deeply ingrained and indeed arise as a necessary function of how the mind operates. They are adaptive and useful. But occasionally we need to override a bias.

Here are some proposals, but each involves taking the perspective of someone on the other side of this debate.

  1. Those of us on the left of the debate (liberals, proponents of gun regulations) should try to recognize that nearly all gun enthusiasts are safe, law-abiding people who are responsible with their guns. Seen through their eyes, the problem is with irresponsible owners. An attempt to place restrictions on their legally owned guns activates another cognitive bias known as the endowment effect, in which people place high value on what they already possess, and the prospect of losing it is aversive.
  2. Those on the right (gun owners) should consider the debate from the perspective of non-gun owners, and might also consider that proposals to regulate firearms are not attempts to seize or ban guns but rather attempts to address one aspect of the problem: the sheer number of guns in the US, all of which could potentially be used for illegal purposes. We’re not trying to ban guns, but rather to regulate them and encourage greater responsibility in their use.

I think these things are important to deal with. The US really does have a problem with gun violence. It’s disproportionately high. Solutions to this problem must recognize the reality of the large number of guns, the perspective of non-gun owners, and the perspective of gun owners. We’re not going to do this by simply arguing for one side. We’re only going to do this by first recognizing these cognitive biases and then attempting to overcome them in ways that search for common ground.

As always: comments are welcome.

A Computer Science Approach to Linguistic Archeology and Forensic Science

Last week (Sept 2014), I heard a story on NPR’s Morning Edition that really got me thinking… (side note: I’m in Ontario so there is no NPR, but my favourite station is WKSU via TuneIn radio on my smartphone). It was a short story, but I thought it was one of the most interesting I’ve heard in the last few months, and it got me thinking about how computer science has been used to understand natural language cognition.

Linguistic Archeology

Here is a link to the actual story (with transcript). MIT computer scientist Boris Katz realized that when people learn English as a second language, they make certain errors that are a function of their native language (e.g. native Russian speakers leave out articles in English). This is not a novel finding; people have known this for some time. Katz, by the way, is one of the many scientists who worked with Watson, the IBM computer that competed on Jeopardy!

Katz trained a computer model on samples of English text such that it could detect a writer’s native language based on the errors in their written English. But the model also learned to determine similarities among the native languages themselves. For example, the model discovered, based on errors in English, that Polish and Russian have historical overlap. In short, the model was able to determine the well-known linguistic family tree among many natural languages.
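As a rough illustration of the idea (and definitely not Katz’s actual model), imagine representing each native language by a profile of error rates in English, classifying a new writer by the nearest profile, and reading language relatedness off the distances between profiles. The error types and all the numbers below are invented for the sketch:

```python
import math

# Hypothetical per-language profiles: rates of three English error types
# (dropped articles, preposition substitutions, verb-agreement slips).
# The numbers are made up; a real model would learn them from text.
PROFILES = {
    "Russian": [0.8, 0.4, 0.3],
    "Polish":  [0.7, 0.5, 0.3],
    "French":  [0.1, 0.6, 0.2],
}

def distance(a, b):
    """Euclidean distance between two error-rate profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(error_rates):
    """Guess the writer's native language: the nearest stored profile."""
    return min(PROFILES, key=lambda lang: distance(PROFILES[lang], error_rates))

def relatedness(lang_a, lang_b):
    """Languages whose speakers make similar English errors sit close
    together, loosely mirroring the linguistic family tree."""
    return distance(PROFILES[lang_a], PROFILES[lang_b])
```

In this toy setup, a writer with error rates `[0.78, 0.42, 0.31]` is classified as a native Russian speaker, and Russian sits much closer to Polish than to French, which is the family-tree flavour of the result.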

The next step is to use the model to uncover new things about dying or disappearing languages. As Katz says:

“But if those dying languages have left traces in the brains of some of those speakers and those traces show up in the mistakes those speakers make when they’re speaking and writing in English, we can use the errors to learn something about those disappearing languages.”

Computational Linguistic Forensics

This is only one example. Another that fascinated me was the work of Ian Lancashire, an English professor at the University of Toronto, and Graeme Hirst, a professor in the computer science department. They noticed that the output of Agatha Christie—she wrote around 80 novels and many short stories—declined in quality in her later years. That itself is not surprising, but they thought there was a pattern. After digitizing her work, they analyzed the technical quality of her output and found that the richness of her vocabulary fell by one-fifth between the earliest two works and the final two. That, and other patterns, are more consistent with Alzheimer’s disease than with normal aging. In short, they are tentatively diagnosing Christie with Alzheimer’s disease based on her written work. You can read a summary HERE and you can read the actual paper HERE. It’s really cool work.
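A vocabulary-richness measure of the kind used in that analysis can be approximated with a simple type-token ratio: distinct word forms divided by total words. This is a crude stand-in for the measures Lancashire and Hirst actually used, and it comes with a known caveat: the ratio shrinks as texts get longer, so real comparisons are made over samples of equal size.

```python
import re

def type_token_ratio(text):
    """Distinct word forms (types) divided by total words (tokens).
    Lower values mean a less varied vocabulary. Because the ratio
    drops as texts get longer, only compare equal-length samples."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)
```

Computing this over equal-sized excerpts from an author’s early and late books is the kind of comparison that would reveal the decline the study reports.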

Text Analysis at Large

I think this work is really fascinating and exciting. It highlights just how much can be understood via text analysis. Some of this is already commonplace: we educators rely on software to detect plagiarism, and Facebook and Google are using these tools as well. One assumes that the NSA might be able to rely on many of these same ideas to infer and predict information and characteristics about the author of some set of written statements. And if a computer can detect a person’s linguistic origin from English textual errors, I’d imagine it can be trained to mimic the same effects and produce English that looks like it was written by a native speaker of another language…but was not. That’s slightly unnerving…