
The Language of Sexual Violence


Women’s March leaders address a rally against the confirmation of Supreme Court nominee Judge Brett Kavanaugh in front of the court building on September 24.
 Chip Somodevilla/Getty Images

The language we use to describe something can provide insights into how we think about it. For example, we all reserve words for close family members (“Mama” or “Papa”) that have special meaning, and these words are often constrained by culture. And as elements of culture, there are times when our linguistic conventions can tell us something very deep about how our society thinks about events.

Current Events

This week (late September 2018) has been a traumatic and dramatic one. A Supreme Court nominee, Brett Kavanaugh, was accused of an attempted rape 35 years ago. Both he and the accuser, Christine Blasey Ford, were interviewed at a Senate hearing, and much has been written and observed about the ways they spoke and communicated during that hearing. At the same time, many women took to social media to describe their own experiences with sexual violence. I have neither academic expertise nor personal experience with sexual violence. But like many, I’ve followed these events with shock and with heartbreak.

Survivors

I’ve noticed something this week about how women who have been victims of sexual violence talk about themselves and the persons who carried out the assault. First of all, many women identify as survivors and not victims. A victim is someone who had something happen to them. A survivor is someone who has been able to overcome (or is working to overcome) those bad things. I don’t know if this is always a conscious decision, though it could be; many women seem to use the term intentionally, as an effective way to show that they have survived something.

Part of The Self

But there is another linguistic construction that is even more interesting. I’ve noticed, especially in the news and on social media, that women say or write “my rapist”, “my abuser”, or “my assailant”. I don’t believe this is intentional or affected. I think this is part of the language because it’s part of how the person thinks about the event, or maybe part of how society thinks about the event. The language suggests that women have internalized the identity of the perpetrator, and that the event and the abuser have also become part of who they are as women. It’s deep and consequential in ways that few other events are.

Of course a sexual assault would be expected to be traumatic and even life changing, but I’m struck by how this is expressed in the idioms and linguistic conventions women use to describe the event. Their language suggests some personal ownership. It’s more than a memory for an event or an episode. It’s a memory for a person, a traumatic personal event, and also knowledge of the self. Autonoetic memory is deeply ingrained. It is “indelible in the hippocampus.”

All of us talk this way sometimes, of course. If you say “this cat” it’s different from saying “my cat”. The former is an abstraction or general conceptual knowledge. The latter is your pet; it’s part of your identity. “My mother”, “my car”, and “my smartphone” are more personal but still somewhat general. But “my heart”, “my child”, “my body”, and “my breath” are deeply personal: these things are simply part of who we are.

Women don’t use this construction when talking about non-sexual violence. They might say “the person who cut me off” or “the guy who robbed me”. Similarly, men who have been assaulted don’t use this language. They say “the man who assaulted me”, “the guy who punched me”, or even “the priest who abused me”. And men do not use this language to refer to people they have assaulted (e.g. “my victim”). You might occasionally hear or read men refer to “my enemy” or “my rival”, which, I think, carries the same deeper, more profound meaning as the terms women use for sexual violence, though without the same trauma. So by and large this seems to be something that women say about sexual violence specifically.

Deep and Personal Memory

So when a woman says “my rapist”, it suggests a deep and personal knowledge. Knowledge that has stayed, and will stay, with them, affecting their lives and how they think about the event and themselves. Eyewitness memory is unreliable. Memory for facts and events—even personal ones—is malleable. But you don’t forget who someone is. You don’t forget the sound of your sibling’s voice. You don’t forget the sight of your children. You don’t forget your address. You don’t forget your enemy…and you would not forget your abuser or your rapist.

The Cognitive Science Age


Complex patterns in the Namib desert resemble neural networks.

The history of science and technology is often delineated by paradigm shifts. A paradigm shift is a fundamental change in how we view the world and our relationship with it. The biggest paradigm shifts are sometimes even referred to as an “age” or a “revolution”. The Space Age is a perfect example: the middle of the 20th century saw an incredible increase in public awareness of space and space travel, and many of the industrial and technical advances that we now take for granted were byproducts of the Space Age.

The Cognitive Science Age

It’s probably cliché to write this, but I believe we are at the beginning of a new age, and a new, profound paradigm shift. I think we’re well into the Cognitive Science Age. I’m not sure anyone calls it that, but I think that is what truly defines the current era. And I also think that an understanding of Cognitive Science is essential for understanding our relationships with the world and with each other.

I say this because in the 21st century, artificial intelligence, machine learning, and deep learning are now being fully realized. Every day, computers are solving problems, making decisions, and making accurate predictions about the future…about our future. Algorithms shape our behaviour in more ways than we realize. We look forward to autonomous vehicles that will depend on the simultaneous operation of many computers and algorithms. Machines will become (and have already become) central to almost everything.

And this is a product of Cognitive Science. For those of us who are cognitive scientists, this new age is our idea, our modern Prometheus.

Cognitive Science 

Cognitive Science is an interdisciplinary field that first emerged in the 1950s and 1960s and sought to study cognition, or information processing, as its own area of study rather than as a strictly human psychological concept. As a new field, it drew from Cognitive Psychology, Philosophy, Linguistics, Economics, Computer Science, Neuroscience, and Anthropology. Although people still tend to work and train in those more established traditional fields, it seems to me that society as a whole is indebted to the interdisciplinary nature of Cognitive Science. And although it is a very diverse field, the most important aspect in my view is the connection between biology, computation, and behaviour.

The Influence of Biology

A dominant force in modern life is the algorithm: a computational engine that processes information and makes predictions. Learning algorithms take in information, learn to make associations, make predictions from those associations, and then adapt and change. This is referred to as machine learning, but the key here is that these machines learn in ways grounded in biology.

For example, the learning rule (Hebbian learning) that underlies much of machine learning was proposed by the psychologist and neuroscientist Donald Hebb at McGill University. Hebb’s 1949 book, The Organization of Behavior, is one of the most important books written in this field and explained how neurons learn associations. This concept was refined mathematically by the Cognitive Scientists Marvin Minsky, David Rumelhart, James McClelland, Geoffrey Hinton, and many others. The advances we see now in machine learning and deep learning are a result of Cognitive Scientists learning how to adapt and build computer algorithms to match algorithms already seen in neurobiology. This is a critical point: it’s not just that computers can learn, but that the learning and adaptability of these systems is grounded in an understanding of neuroscience. That’s the advantage of an interdisciplinary approach.
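To make the Hebbian idea concrete, here is a minimal sketch in Python of the “cells that fire together wire together” update rule. It is not Hebb’s own formulation or any specific model from the literature; the learning rate, pattern, and network size are illustrative assumptions only.

```python
import numpy as np

# A minimal sketch of Hebbian learning: connections between co-active units grow.
# All values here (learning rate, pattern, network size) are illustrative choices.

n_units = 4
weights = np.zeros((n_units, n_units))
learning_rate = 0.1

# Repeatedly present a pattern in which units 0 and 1 are active together.
pattern = np.array([1.0, 1.0, 0.0, 0.0])
for _ in range(20):
    # Hebbian update: delta_w[i, j] = eta * x[i] * x[j]
    weights += learning_rate * np.outer(pattern, pattern)
    np.fill_diagonal(weights, 0.0)  # no self-connections

print(weights)  # the 0-1 connection strengthens; the rest stay at zero
```

The point of the toy example is simply that the weight change depends only on the co-activity of the two units, which is the biological insight that later network models built on.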

The Influence of Behaviour 

As another example, the theoretical grounding for the AI revolution was developed by Allen Newell (a computer scientist) and Herbert Simon (an economist). Their work from the 1950s through the 1970s on understanding human decision making and problem solving, and on how to model them mathematically, provided a computational approach grounded in an understanding of human behaviour. Again, this is an advantage of the interdisciplinary approach afforded by Cognitive Science.

The Influence of Algorithms on our Society 

Perhaps one of the most salient and immediately present ways to see the influence of Cognitive Science is in the algorithms that drive the many products that we use online. Google is many things, but at its heart, it is a search algorithm and a way to organize the knowledge in the world so that the information that a user needs can be found. The basic ideas of knowledge representation that underlie Google’s categorization of knowledge were explored early on by Cognitive Scientists like Eleanor Rosch and John Anderson in the 1970s and 1980s. 

Or consider Facebook. The company runs and designs a sophisticated algorithm that learns about what you value and makes suggestions about what you want to see more of. Or, maybe more accurately, it makes suggestions for what the algorithm predicts will help you to expand your Facebook network… predictions for what will make you use Facebook more. 

In both of these cases, Google and Facebook, the algorithms are learning to connect the information that they acquire from the user, from you, with the existing knowledge in the system to make predictions that are useful and adaptive for the users, so that the users will provide more information to the system, so that it can refine its algorithm and acquire still more information, and so on. As the network grows, it seeks to become more adaptive, more effective, and more knowledgeable. This is what your brain does, too: it causes you to engage in behaviour that seeks out information to refine its ability to predict and adapt.
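As a rough illustration of that feedback loop, here is a small sketch of an online learner that updates its estimate of a user’s interests from the user’s own responses. It is not a description of Google’s or Facebook’s actual systems; the topics, update rule, and numbers are invented for illustration.

```python
import numpy as np

# A toy sketch of the feedback loop described above: the system predicts what a
# user will engage with, the user's response becomes new data, and the model updates.

rng = np.random.default_rng(1)
n_topics = 5
weights = np.zeros(n_topics)                          # the system's estimate of interest
true_interest = np.array([0.9, 0.1, 0.7, 0.0, 0.2])   # hidden "ground truth" user

for step in range(200):
    item = rng.integers(n_topics)                     # candidate item about one topic
    predicted_engagement = weights[item]
    engaged = rng.random() < true_interest[item]      # user clicks (or not)
    # Online update: nudge the estimate toward the observed behaviour.
    weights[item] += 0.05 * (engaged - predicted_engagement)

print(np.round(weights, 2))  # estimates drift toward the user's true interests
```

The loop captures the essential point of the paragraph above: each prediction generates behaviour, each behaviour generates data, and each datum refines the next prediction.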

These networks and algorithms are societal minds; they serve the same role for society that our own network of neurons serves for our body. Indeed, these algorithms can even change society. This is something that some people fear.

Are Fears of the Future Well Founded?

When tech CEOs and politicians worry about the dangers of AI, I think that idea is at the core of their worry: the algorithms to which we entrust more and more of our decision making are altering our behaviour to serve the algorithm, in the same way that our brain alters our behaviour to serve our own mind and body. That strikes many as unsettling and unstoppable. I think these fears are founded and unavoidable, but like any new age or paradigm shift, we should continue to approach and understand it from scientific and humanist directions.

The Legacy of Cognitive Science

The breakthroughs of the 20th and 21st centuries arose as a result of exploring learning algorithms in biology, instantiating those algorithms in increasingly powerful computers, and relating both of these to behaviour. The technological improvements in computing and neuroscience have enabled these ideas to become a dominant force in the modern world. Fear of a future dominated by non-human algorithms and intelligence may be unavoidable at times, but an understanding of Cognitive Science is crucial to being able to survive and adapt.

 

The fluidity of thought

Knowing something about the basic functional architecture of the brain is helpful in understanding the organization of the mind and in understanding how we think and behave. But when we talk about the brain, it’s nearly impossible to do so without using conceptual metaphors (when we talk about most things, it’s impossible to do so without metaphors). 

Conceptual metaphor theory is a broad theory of language and thinking from the extraordinary linguist George Lakoff. One of the basic ideas is that we think about things and organize the world into concepts in ways that correspond to how we talk about them. It’s not just that language directs thought (that’s Whorf’s idea), but that these two things are linked and our language also provides a window into how we think about things. 

Probably the most common metaphor for the brain is the “brain is a computer” metaphor, but there are other, older ideas.

The hydraulic brain

One interesting metaphor for brain and mind is the hydraulic metaphor. This goes back at least to Descartes (and probably earlier), who advocated a model of neural function whereby basic functions were governed by a series of tubes carrying “spirits” or vital fluids. In Descartes’s model, higher-order thinking was handled by a separate mind that was not quite in the body. You might laugh at the idea of brain tubes, but it seems quite reasonable as a theory from an era when bodily fluids were the most obvious indicators of health, sickness, and simply being alive: blood, discharge, urine, pus, bile, and other fluids are all indicators of things either working well or not working well. And when they stop, you stop. In Descartes’s time, these were the primary ways to understand the human body. So in the absence of other information about how thoughts and cognition occur, it makes sense that early philosophers and physiologists would make an initial guess that thoughts in the brain are also a function of fluids.

Metaphors for thinking

This idea, no longer endorsed, lives on in our language in the conceptual metaphors we use to talk about the brain and mind. We often talk about cognition and thinking as information “flowing”, in the same way that fluid might flow. We have common expressions in English like the “stream of consciousness” or “waves of anxiety”, “deep thinking”, “shallow thinking”, ideas that “come to the surface”, and memories that come “flooding back” when you encounter an old friend. These all have their roots (“roots” is another conceptual metaphor, of a different kind!) in the older idea that thinking and brain function are controlled by the flow of fluids through the tubes in the brain.

In the modern era, it is still common to discuss neural activation as a “flow of information”. We might say that information “flows downstream”, or that there is a “cascade” of neural activity. Of course we don’t really mean that neural activation and cognition flow like water, but like so many metaphors, it’s just impossible to describe these things without using such expressions, and in doing so we activate the common conceptual metaphor that thinking is a fluid process.

There are other metaphors as well (like the electricity metaphor: behaviours being “hard wired”, getting your “wires crossed”, an idea that “lights up”), but I think the hydraulic metaphor is my favourite because it captures the idea that cognition is fluid. We can dip our toes in the stream or hold back floods. And as you can see from earlier posts, I have something of a soft spot for river metaphors.

 

 

Cognitive Psychology and the Smartphone


A pile of smartphones. Image from Finder

The iPhone was released 10 years ago, and that got me thinking about the relationships I’ve had with smartphones and mobile devices. Of course, I remember almost all of them…almost as if they were real relationships. The first one, the Qualcomm QPC 860, was solid but simple. That was followed by a few forgettable flip phones and a Motorola “ROKR” phone that never really lived up to its promise.

But then came the iPhone, and everything changed. I started really loving my phone. I had an iPhone 3GS (sleek and black) and a white iPhone 4S, which I regard as the pinnacle of iPhone design and still keep as a backup phone. A move to Android saw a brief run with an HTC, then a steady commitment with my dependable and conservative Moto X Play for over two years, before upgrading to a beautiful OnePlus 5T. It’s with me every single day, and almost all the time. Is that too much? Probably.

Smartphones are used for many things

There is a very good chance that you are reading this on a smartphone. Most of us have one, and we probably use it for many different tasks.

  • Communication (text, email, chat)
  • Social Media (Facebook, Twitter)
  • Taking and sharing photos
  • Music
  • Navigation
  • News and weather
  • Alarm clock

One thing that all of these tasks have in common is that the smartphone has replaced other ways of accomplishing the same tasks. That was the original idea for the iPhone: one device to do many things. Not unlike “the One Ring”, the smartphone has become the one device to rule them all. Does it rule us also?

The Psychological Cost of Having a Phone

For many people, the device is always with them. Just look around a public area: it’s full of people on their phones. As such, the smartphone starts to become part of who we are. This ubiquity could have psychological consequences. And there have been several studies looking at the costs. Here are two that piqued my interest.

A few years ago, Cary Stothart did a cool study in which research participants were asked to engage in an attention-monitoring task (the SART). They did the task twice, and in the second session, one third of the participants received random text notifications while they did the task, one third received a random call to their phone, and one third proceeded as they had in the first session, with no additional interference. Participants in the control condition performed at the same level in the second session, but participants who received random notifications (text or call) made significantly more errors on the task during the second session. In other words, there was a real cost to getting a notification. Each buzz distracted the person just a bit, but enough to reduce performance.

So put your phone on “silent”? Maybe not…

A paper recently published by Adrian Ward and colleagues (Ward, Duke, Gneezy, & Bos, 2017) seems to suggest that just having your phone near you can interfere with some cognitive processing. In their study, they asked 448 undergraduate volunteers to come into the lab and participate in a series of psychological tests. Participants were randomly assigned to one of three conditions: desk, pocket/bag, or other room. People in the other room condition left all of their belongings in the lobby before entering the testing room. People in the desk condition left most of their belongings in the lobby but took their phones into the testing room and were instructed to place their phones face down on the desk. Participants in the pocket/bag condition carried all of their belongings into the testing room with them and kept their phones wherever they naturally would (usually pocket or bag). Phones were kept on silent.

The participants in all three groups then engaged in a test of working memory and executive function called the “operation span” task, in which participants had to work out basic math problems while keeping track of letters (you can run the task yourself here), as well as the Raven’s progressive matrices task, which is a test of fluid intelligence. The results were striking. On both tasks, having the phone nearby significantly reduced performance.
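For readers who want a feel for how a three-condition design like this gets compared statistically, here is a minimal sketch using simulated scores. The group means, spread, and sample sizes below are invented for illustration; they are not the authors’ data.

```python
import numpy as np
from scipy import stats

# Simulate operation-span-style scores for three phone-location conditions.
rng = np.random.default_rng(42)
n_per_group = 150
other_room = rng.normal(loc=40, scale=8, size=n_per_group)   # phone in another room
pocket_bag = rng.normal(loc=38, scale=8, size=n_per_group)   # phone in pocket/bag
on_desk = rng.normal(loc=36, scale=8, size=n_per_group)      # phone face down on desk

# One-way ANOVA across the three conditions.
f_stat, p_value = stats.f_oneway(other_room, pocket_bag, on_desk)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```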

A second study found that people who were more dependent on their phones were affected more by the phone’s presence. This is not good news for someone like me, who seems to always have his phone nearby. They write:

Those who depend most on their devices suffer the most from their salience, and benefit the most from their absence.

One of my graduate students is currently running a replication of this work and (assuming it replicates) we’ll look in greater detail at just how and why the smartphone has this effect.

Are Smartphones a Smart Idea?

Despite the many uses for these devices, I wonder how helpful they really are… for me, at least. When I am writing or working, I often turn the wifi off (or use Freedom) to reduce digital distractions. But I still have my phone sitting right on the desk and I catch myself looking at it. There is a cost to that. I tell students to put their phones on silent and in their bags during an exam. There is a cost to that. I tell students to put them on the desk on silent mode during lecture. There is a cost to that. When driving, I might have the phone in view because I use it to play music and navigate with Google Maps, and I use Android Auto to maximize the display and mute notifications and distractions. There is a cost to that.

It’s a love/hate relationship. One of the reasons I still have my iPhone 4S is that it’s slow and has no email or social media apps. I’ll bring it with me on a camping trip or hike so that I have weather, maps, phone, and text, but nothing else: it’s less distracting. Though it seems weird to have to own a second phone to keep me from being distracted by my real one.

Many of us spend hundreds of dollars on a smartphone, plus ongoing fees for a data plan, and at the same time have to develop strategies to avoid using the device. It’s a strange paradox of modern life that we pay for something that we then have to work hard to avoid using.

What do you think? Do you find yourself looking at your phone and being distracted? Do you have the same love/hate relationship? Let me know in the comments.

References

Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity. Journal of the Association for Consumer Research. https://doi.org/10.1086/691462

Stothart, C., Mitchum, A., & Yehnert, C. (2015). The attentional cost of receiving a cell phone notification. Journal of Experimental Psychology: Human Perception and Performance, 41(4), 893–897. http://doi.org/10.1037/xhp0000100

 

A Computer Science Approach to Linguistic Archeology and Forensic Science

Last week (Sept 2014), I heard a story on NPR’s Morning Edition that really got me thinking… (side note: I’m in Ontario, so there is no NPR, but my favourite station is WKSU via TuneIn radio on my smartphone). It was a short story, but I thought it was one of the most interesting I’ve heard in the last few months, and it got me thinking about how computer science has been used to understand natural language cognition.

Linguistic Archeology

Here is a link to the actual story (with transcript). MIT computer scientist Boris Katz realized that when people learn English as a second language, they make certain errors that are a function of their native language (e.g. native Russian speakers leave out articles in English). This is not a novel finding; people have known this for some time. Katz, by the way, is one of many scientists who worked with Watson, the IBM computer that competed on Jeopardy!

Katz trained a computer model on samples of English text such that it could detect the writer’s native language based on the errors in their written English. But the model also learned to determine similarities among those native languages. The model discovered, based on errors in English, that Polish and Russian have historical overlap. In short, the model was able to recover the well-known linguistic family tree among many natural languages.
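To give a sense of how this kind of native-language identification works in practice, here is a minimal sketch of the general approach: learn which patterns in a writer’s English predict their first language. This is not Katz’s model; the training sentences, labels, and choice of classifier are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: English sentences with second-language errors, paired with
# the (invented) native language of the writer. Real systems use thousands of texts.
texts = [
    "I went to store and bought bread.",      # dropped articles
    "He is knowing the answer already.",
    "She gave to me the book yesterday.",
    "We discussed about the plan last week.",
]
native_languages = ["Russian", "Hindi", "Spanish", "Hindi"]

# Character n-grams capture small grammatical patterns without hand-coding rules.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, native_languages)

print(model.predict(["Yesterday I saw movie with friend."]))
```

With enough real data, the learned weights themselves become interesting: which error patterns cluster together tells you which native languages behave similarly, which is how the family-tree result falls out.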

The next step is to use the model to uncover new things about dying or disappearing languages. As Katz says:

“But if those dying languages have left traces in the brains of some of those speakers and those traces show up in the mistakes those speakers make when they’re speaking and writing in English, we can use the errors to learn something about those disappearing languages.”

Computational Linguistic Forensics

This is only one example. Another that fascinated me was the work of Ian Lancashire, an English professor at the University of Toronto, and Graeme Hirst, a professor in the computer science department. They noticed that the output of Agatha Christie—she wrote around 80 novels and many short stories—declined in quality in her later years. That itself is not surprising, but they thought there was a pattern. After digitizing her work, they analyzed the technical quality of her output and found that the richness of her vocabulary fell by one fifth between her earliest two works and her final two. That pattern, along with others, is more consistent with Alzheimer’s disease than with normal aging. In short, they are tentatively diagnosing Christie with Alzheimer’s disease based on her written work. You can read a summary HERE and you can read the actual paper HERE. It’s really cool work.
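One of the simplest ways to quantify vocabulary richness is the type-token ratio: the number of distinct words divided by the total number of words. Lancashire and Hirst used more sophisticated measures over full novels; the sketch below only illustrates the basic idea, and the two text snippets are invented, not Christie’s prose.

```python
import re

# Type-token ratio: distinct words / total words. Lower values mean a writer is
# reusing a smaller vocabulary. The samples below are made-up illustrations.

def type_token_ratio(text: str) -> float:
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

early_sample = "The train rattled through the dusk while she weighed every clue again."
late_sample = "It was a nice day and it was a nice place and she was happy."

print(f"early: {type_token_ratio(early_sample):.2f}")
print(f"late:  {type_token_ratio(late_sample):.2f}")  # lower ratio = less varied vocabulary
```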

Text Analysis at Large

I think this work is really fascinating and exciting. It highlights just how much can be understood via text analysis. Some of this is already commonplace: we educators rely on software to detect plagiarism, and Facebook and Google are using these tools as well. One assumes that the NSA might be able to rely on many of these same ideas to infer and predict information and characteristics about the author of some set of written statements. And if a computer can detect a person’s linguistic origin from English textual errors, I’d imagine it can be trained to mimic the same effects and produce English that looks like it was written by a native speaker of another language…but was not. That’s slightly unnerving…