Cognitive Psychology and the Smartphone

A pile of smartphones. Image from Finder

The iPhone was released 10 years ago and that got me thinking about the relationships I've had with smartphones and mobile devices. Of course, I remember almost all of them…almost as if they were real relationships. The first one, the Qualcomm QPC 860, was solid but simple. That was followed by a few forgettable flip phones and a Motorola "ROKR" phone that never really lived up to its promise.

But then came the iPhone, and everything changed. I started really loving my phone. I had an iPhone 3GS (sleek and black) and a white iPhone 4S, which I regard as the pinnacle of iPhone design and still keep as a backup phone. A move to Android saw a brief run with an HTC, and I was then in a steady commitment with my dependable and conservative Moto X Play for over 2 years before upgrading to a beautiful OnePlus 5T. It's with me every single day, and almost all the time. Is that too much? Probably.

Smartphones are used for many things

There is a very good chance that you are reading this on a smartphone. Most of us have one, and we probably use it for many different tasks.

  • Communication (text, email, chat)
  • Social Media (Facebook, Twitter)
  • Taking and sharing photos
  • Music
  • Navigation
  • News and weather
  • Alarm clock

One thing that all of these tasks have in common is that the smartphone has replaced other ways of accomplishing them. That was the original idea for the iPhone: one device to do many things. Not unlike "the One Ring", the smartphone has become the one device to rule them all. Does it rule us also?

The Psychological Cost of Having a Phone

For many people, the device is always with them. Just look around a public area: it’s full of people on their phones. As such, the smartphone starts to become part of who we are. This ubiquity could have psychological consequences. And there have been several studies looking at the costs. Here are two that piqued my interest.

A few years ago, Cary Stothart did a cool study in which research participants were asked to engage in an attention monitoring task (the SART). They did the task twice, and on the second session, 1/3 of the participants received random text notifications while they did the task, 1/3 received a random call to their phone, and 1/3 proceeded as they did in the first session, with no additional interference. Participants in the control condition performed at the same level on the second session, but participants who received random notifications (text or call) made significantly more errors on the task during the second session. In other words, there was a real cost to getting a notification. Each buzz distracted the person just a bit, but enough to reduce performance.

So put your phone on “silent”? Maybe not…

A paper recently published by Adrian Ward and colleagues (Ward, Duke, Gneezy, & Bos, 2017) seems to suggest that just having your phone near you can interfere with some cognitive processing. In their study, they asked 448 undergraduate volunteers to come into the lab and participate in a series of psychological tests. Participants were randomly assigned to one of three conditions: desk, pocket/bag, or other room. People in the other room condition left all of their belongings in the lobby before entering the testing room. People in the desk condition left most of their belongings in the lobby but took their phones into the testing room and were instructed to place their phones face down on the desk. Participants in the pocket/bag condition carried all of their belongings into the testing room with them and kept their phones wherever they naturally would (usually pocket or bag). Phones were kept on silent.

The participants in all three groups then engaged in a test of working memory and executive function called the "operation span" task, in which participants had to solve basic math problems while keeping track of letters (you can run the task yourself here), as well as the Raven's progressive matrices task, which is a test of fluid intelligence. The results were striking. In both cases, having the phone nearby significantly reduced performance on these tasks.
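
To make the task concrete, here is a rough console sketch of a single operation-span trial. It is a simplified illustration of the general logic (verify equations while holding letters in memory, then recall the letters in order), not the program used in the study; the set size and scoring here are placeholders.

```python
import random

LETTERS = "FHJKLNPQRSTY"

def ospan_trial(set_size: int = 4) -> bool:
    """One simplified operation-span trial: alternate equation checks and letters."""
    to_remember = []
    for _ in range(set_size):
        a, b = random.randint(1, 9), random.randint(1, 9)
        shown = a + b + random.choice([0, 1])  # sometimes display an incorrect sum
        input(f"Is {a} + {b} = {shown}? (y/n): ")  # a real task would also score this response
        letter = random.choice(LETTERS)
        print(f"Remember: {letter}")
        to_remember.append(letter)
    recalled = input("Type the letters in order, no spaces: ").strip().upper()
    return recalled == "".join(to_remember)

if __name__ == "__main__":
    print("Correct recall!" if ospan_trial() else "Recall error.")
```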

A second study found that people who were more dependent on their phones were more affected by the phone's presence. This is not good news for someone like me, who seems to always have his phone nearby. They write:

Those who depend most on their devices suffer the most from their salience, and benefit the most from their absence.

One of my graduate students is currently running a replication study of this work, and (assuming it replicates) we'll look in greater detail at just how and why the smartphone has this effect.

Are Smartphones a Smart Idea?

Despite the many uses for these devices, I wonder how helpful they really are…for me, at least. When I am writing or working, I often turn the wifi off (or use Freedom) to reduce digital distractions. But I still have my phone sitting right on the desk and I catch myself looking at it. There is a cost to that. I tell students to put their phones on silent and in their bags during an exam. There is a cost to that. I tell students to put them on the desk on silent mode during lecture. There is a cost to that. When driving, I might have the phone in view because I use it to play music and navigate with Google Maps. I use Android Auto to maximize the display and mute notifications and other distractions. There is a cost to that.

It's a love/hate relationship. One of the reasons I still have my iPhone 4S is that it's slow and has no email or social media apps. I'll bring it with me on a camping trip or hike so that I have weather, maps, phone, and text, but nothing else: it's less distracting. Though it seems weird to have to own a second phone to keep me from being distracted by my real one.

Many of us spend hundreds of dollars on a smartphone, and several dollars a day for a data plan, and at the same time have to develop strategies to avoid using the device. It's a strange paradox of modern life that we pay to use something that we have to work hard to avoid using.

What do you think? Do you find yourself looking at your phone and being distracted? Do you have the same love/hate relationship? Let me know in the comments.

References

Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain Drain: The Mere Presence of One’s Own Smartphone Reduces Available Cognitive Capacity. Journal of the Association for Consumer Research. https://doi.org/10.1086/691462

Stothart, C., Mitchum, A., & Yehnert, C. (2015). The attentional cost of receiving a cell phone notification. Journal of Experimental Psychology: Human Perception and Performance, 41(4), 893–897. https://doi.org/10.1037/xhp0000100

 

Inspiration in the Lab

I run a mid-sized cognitive psychology lab: it's me as the PI, 2 PhD students, 3 master's students, and a handful of undergraduate honours students and RAs. We are a reasonably productive lab, but there are times when I think we could be doing more in terms of getting our work out and also coming up with innovative and creative ideas.

Lately I've been thinking of ways to break out of our routines. Research, in my opinion, should be a combination of repetition (writing, collecting data, running an analysis in R) and innovation, where we look at new techniques, new ideas, new explanations. How to balance these? Also, I want to increase collaborative problem solving in my lab. Often a student has a data set, and the most common process is the student and me working together, or me reviewing what she or he has done. But sometimes it would be great if we were all aware of the challenges and promise of each other's work. We have weekly lab meetings, but that's not always enough.

What follows are some ideas I’d like to implement in the near future. I’d love to hear what works (and does not work) from other scientists.

An Afternoon of Code

We rely on software (PsychoPy, Python, R, and E-Prime) to collect behavioural data. We have several decent programs to run the experiments we want to run, but programming is often a bottleneck, and all of us sometimes struggle to translate ideas into code. One way to work on this might be to have a coding retreat or an afternoon of coding. We all agree to meet in my lab and work on a shared task, or on designing a paradigm that we've never used before. I'd put up a prize for the first student to solve the problem. As an example, I'm looking to build a version of the classic "weather prediction task". We might agree to spend a day working on this, maybe each on our own program, but at the same time so we can share ideas.
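
For the weather prediction task, something like the console sketch below captures the trial logic I have in mind: cards are shown in combinations, the participant predicts "rain" or "sun", and the outcome is determined probabilistically by the cue pattern. The cue patterns and probabilities here are placeholders rather than the published values, and a real version would present the cards in PsychoPy or E-Prime instead of at a text prompt.

```python
import random

# Placeholder cue patterns (which of 4 cards are shown) and P("rain") for each.
CUE_PROBS = {
    (1, 0, 0, 0): 0.8,
    (0, 1, 0, 0): 0.6,
    (0, 0, 1, 0): 0.4,
    (0, 0, 0, 1): 0.2,
    (1, 1, 0, 0): 0.7,
    (0, 0, 1, 1): 0.3,
}

def run_trial() -> bool:
    """Show one cue pattern, collect a prediction, give feedback."""
    pattern = random.choice(list(CUE_PROBS))
    outcome = "rain" if random.random() < CUE_PROBS[pattern] else "sun"
    shown = [f"card {i + 1}" for i, on in enumerate(pattern) if on]
    response = input(f"Cards shown: {', '.join(shown)}. Predict (rain/sun): ").strip().lower()
    correct = response == outcome
    print(f"Outcome: {outcome} -- {'correct' if correct else 'incorrect'}")
    return correct

if __name__ == "__main__":
    n_trials = 10  # a real session would run on the order of 100+ trials
    accuracy = sum(run_trial() for _ in range(n_trials)) / n_trials
    print(f"Accuracy over {n_trials} trials: {accuracy:.2f}")
```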

Data Visualization and Analysis

Similar to the idea above, I am thinking of ways to improve our skills in RStudio. One idea might be to take a data set from the most recent study in our lab and spend a day working together in RStudio to explore different visualizations, techniques for parsing the data, etc. We each know different things, and R allows for so much customization, but it would be helpful to be aware of each other's skill sets.

Writing at the Pub

Despite some of its limitations, I've been using Google Docs as a way to prepare manuscripts for publication. It's not much worse than Word, it really allows for better collaborative work, and it integrates smoothly with Slack. With the addition of Paperpile, it's a very competent document preparation system. So I thought about setting aside a few hours in the campus pub, bringing our laptops, and all writing together. Lab members who are working together on a paper can write simultaneously. Or we might pick one paper, and even grad students who are not authors per se would still be able to help with edits and ideas. Maybe start with coffee/tea…then a beer or two.

Internal Replications

I've also thought about spending some time designing and implementing replications of earlier work. We already do this to some degree, but I have many published studies from 10 or more years ago that might be worth revisiting. I thought of meeting once every few months with my team to look at these and pick one to replicate. Then we work as a team to try to replicate the study as if it were someone else's work (not ours) and run a full study. This would be done alongside the new/current work in our lab.

Chefs learn by repeating the basic techniques over and over again until they master them and can produce a simple dish perfectly each time. I can think of no reason not to employ the same technique in my lab. I think the repetitive, inward-focused nature of a task like this might also lead to new insights as we rediscover what led us to design a task or experiment in a certain way.

Conclusion

I am planning on taking these ideas to my trainees at one of our weekly lab meetings in the next few weeks. My goal is to just try a few new things to break up the routine. I'd welcome any comments, ideas, or suggestions.

Taming My Distracted Mind

There is mounting evidence that digital devices, screens, and smartphones are a real roadblock to productivity. The very tools that are supposed to make us more productive might be robbing us of that ability.

The Modern Worker

I'm a psychology professor at a large research institution. This means that although I do spend some time teaching in a large lecture hall, mostly I'm in my office writing, reading, doing email, attending meetings, and planning…that is, spending my time like many other modern workers. I've been at this for a while and I can still recall a time when not everyone had an email address, when research articles had to be printed, when submitting my work to a journal meant actually mailing four identical copies of the manuscript to the publisher. But nearly all of that is now done online. I sit at my desk to do email, to read, to analyze data, to access research papers, to grade assignments, to comment on student work…everything. And lately this has expanded to writing and managing email at home, at breakfast on my phone, reading email in a faculty meeting on my phone, in bed on my phone…in the bathroom on my phone. And really…why am I doing my work in the bathroom?

What's more, everything is being carried out on a device or a browser that is also used for recreation, media consumption, and social media. I read news, play games, and watch baseball on my laptop. I watch sports on my laptop and tweet about the game at the same time.

What this means is that my workstation is essentially also a playstation.

A Tired Mind

Lately I'm finding that a week of desk/computer work leaves my mind feeling like mush. Much more cognitive fatigue than there used to be. I'm less able to focus on my work. I can't read the whole way through a paper. I'll start an email, write two lines, and then my attention wanders. It did not use to be this way, and it's not just because I'm getting older (I'm a few weeks shy of 47). I think my work habits have begun to tire me out.

Meditation does help in this regard…I can meditate for 10–15 minutes with little difficulty. And running helps too: I can run for an hour without getting bored and feel refreshed (not tired).

But the minute I'm at my desk I slide right back into the habit of having 10 browser tabs open…each one vying for my attention. No matter what I try, the second I sit down at my university office or home office to write, I lose my ability to concentrate on my work. It starts with email, and then 10 minutes of local news, maybe Twitter…some more email. And back and forth, and then I'm still working on the same email.

Some remedies

I’ve started taking steps this week to create some “digital distance” at work. Small habits to try to improve my work experience. None of this is scientific: I’m just trying to retrain. And I’m not so much interested in being more productive…just less tired.

  1. I'm printing more and screen reading less. This goes for articles, student work, and editing my own work. (Don't worry: I'm recycling the paper by printing on the back of other used paper!)
  2. This is a big one: after many years of running everything through a browser and Gmail, I'm switching back to an actual email client (Spark Mail App for Mac). That way, when I decide to do email, I'm ONLY doing email and not tempted to read FB, Twitter, news, etc. in another tab. Gmail or Outlook webmail was killing me on that front, because "hey, you already have Chrome open, just leave a tab open for Twitter". So Chrome is closed when I'm responding to email.
  3. My lab and my graduate students are now on Slack (not email) so that when I'm doing project management, research planning, and advising, I can concentrate on that and nothing else. I can close Chrome and email.
  4. I've turned off all the notifications on my smartphone, except texts/calls from my wife & kids, and their school.
  5. No posting to social media in the morning, because I'll just be thinking about whether or not there are hits. This is another big one. I'll post something at breakfast or comment and then keep checking. I've already deactivated Facebook completely to make this even easier. My students and I are even carrying out a research study on this specific topic (more detail on that later…when the data are in).

I’m curious if others are finding similar things. Do you think that your productivity has waned? Do you think that working all day on a screen is reducing your ability to concentrate? Have you taken steps to correct this or retrain your mind? I’d be interested in hearing.

Le biais de confirmation (confirmation bias): The Story of the Bilingual Advantage

The newest salvo in psychology's "reproducibility crisis" is not in social psychology; it is hitting the field of psycholinguistics. In this case, the evidence is mounting that the so-called "bilingualism advantage" may not be an advantage after all. Worse, it may be something like the Mozart Effect for psycholinguistics…that is, an effect that is plausible, desirable, and marketable enough that we all believe it and ignore reputable counter-evidence.

Full disclosure: our children have attended a French Immersion school, and as we live in a country with two official languages, I think it's important to know some of both. But I'm not bilingual myself (6 years of German in high school and university, but I no longer speak it). So I like the idea of a second language. And I like the idea of the bilingual advantage too. I've assumed that this advantage is present and measurable, but now I'm not so sure. This controversy is worth paying attention to.

The Bilingual Advantage

The story goes like this. People who speak two languages fluently are constantly switching between them. They have to inhibit one language in order to respond in the other. And because switching and inhibition are two of the most well-known and well-studied aspects of the cognitive control system known as the executive functions, it's assumed that bilinguals would be especially adept at these behaviours. And if they are good at switching and inhibiting within language, they may have a general executive functioning advantage for other behaviours as well.

Ellen Bialystok at York University and others have investigated this claim and have produced quite a lot of data in favour of the idea that general executive functioning abilities are superior in bilinguals relative to monolingual speakers. The advantages might also persist into old age, and they may guard against cognitive decline in aging. Dr. Bialystok's work is extensive and has relied on many different measures, and she's arguably one of the towering figures in psycholinguistics.

Does this Work Generalize?

An article in February 2016 in The Atlantic suggested that recent attempts to replicate and generalize some of this work have not been successful. Specifically, Kenneth Paap at San Francisco State has argued that there is no general advantage for bilinguals and that any advantages are either very local (confined to language) or are artifacts of small-sample studies with idiosyncratic groups. Systematic attempts to replicate the work have not been successful. Reviews of published studies found evidence of publication bias. And other psychologists have found the same thing. In other words, according to Paap, whatever advantages these groups might have shown in early studies, they can't really be attributed to bilingualism.

The Battle Lines

By all accounts, this is a showdown of epic proportions. According to the Atlantic article, Paap has thrown the gauntlet down and said  (paraphrased) “Let’s work together, register the studies, collect good data, and get to the bottom of it.” Even a co-author of one of the most well-cited bilingualism advantage papers is now questioning the work. My colleague at Western, J. Bruce Morton, is quoted as saying:

“If people were really committed to getting to the bottom of this, we’d get together, pool our resources, study it, and that would be the end of the issue. The fact that people won’t do that suggest to me that there are those who are profiting from either perpetuating the myth or the criticism.”

But proponents of the advantage are not interested in this, suggesting that Paap and others are not properly controlling their work, and also pointing to their recent work with brain imaging (which gets away from the less-than-ideal executive functioning tasks but could also fall prey to the "seductive allure of neuroscience"…which is another topic for another day).

This is, I think, a real scientific controversy. I think we should get to the bottom of it. If the advantage is robust and general, then it's going to show up in these newer studies. If it's not, then it becomes an outmoded idea (like so many in psychology and in science). That's progress. There is the risk that the inherent appeal of the advantage will allow it to persist even if the science does not back it up, and that's problematic.

Redesigning my Graduate Seminar

I’m writing this to crowdsource and to get advice on course design, so this might not be the most interesting blog entry I’ve ever written. But if this is the kind of thing that intrigues you…read on!

I teach a graduate seminar on cognition every two years. It's one of the required courses for our cognition graduate students (Master's and PhD); it covers the fundamental issues in the field and also helps prepare students for the PhD exams. I usually have a few students from other areas in psychology (e.g. social psychology) as well as students from other departments (philosophy, neuroscience, and business). I've run this class since 2007, and I'm thinking of overhauling some of the topics for the Fall 2015 session. An example of the most recent syllabus can be found HERE.

The basic course

The official description of the course as it stands now is as follows.

“This course aims to provide graduate students with exposure to classic and current research in cognitive psychology. We will read and discuss articles on the major topics in the field, including high-level perception, categorization, attention, working memory, knowledge, language, and thought. The readings will encompass theoretical approaches, behavioural research, computational modelling, and cognitive neuroscience. Meetings will follow a seminar format, in which students will discuss the readings for each class. To frame the discussion for each meeting, the instructor will provide background and any needed tutorials. Marks will be based on participation and written work.”

The class has a midterm, a final paper, and discussion papers. I cover topics like:

  • Viewpoint Independent vs. Viewpoint Dependent Theories of Object Recognition
  • Theories of Working Memory
  • Inductive Reasoning: Similarity-based or Theory-based?
  • Categorization: Prototypes vs. Exemplar Models

This is all pretty standard graduate seminar material. But I'm thinking of updating some topics and, in particular, updating the way the course is run. I'm not sure the standard format is the best, and the last time I ran a seminar (not this one, but a class on "Concepts and Categories"), I felt that it was not working well. Discussion did not flow, I talked too much, and there was a lack of enthusiasm. This was not due to the students; I think it was a result of my not leading it well, and of the limitations of the standard seminar format.

A new format

I'm planning to keep some of the topics, but add in newer ones that could be challenging or debate-worthy. For example:

  • The Dual-Process Approach to Thinking
  • What are Mental Representations and Are They Needed for a Theory of Cognition?
  • Cognitive Heuristics: Helpful or Harmful?
  • Is Vision Strictly Visual? Vision as Action, Evidence from Blind Echolocation, and Visual Imagery

Each class would have around 4-5 papers assigned, which would be a mix of classic, standard papers and new perspectives on that issue. If possible, I will assign readings that cover two sides of an issue. In the past, there have usually been 10-15 people in this seminar, and it meets for three hours, typically two 75-minute sections with a short break.

Here are four possible ways to run a class. I've usually used the first two…sometimes it works well, sometimes less well.

1. The Standard Seminar. Each class is a discussion of the central issues. I provide 3 or 4 central questions (or maybe a brief outline) that students can prepare for. Rather than discuss each paper in detail, we will try to answer the central questions but will use the assigned readings as a guide.

2. Student-led Discussions. Each class can have one student assigned to be the discussion leader. Alternatively, I act as discussion leader, but there is one student assigned to present each paper.

3. The Debate Team. Each class is framed around a debate and there will be two teams. Each team consists of a leader and two panelists. They will face off, and the remaining students will assess the arguments, judge the winner, and provide feedback. A three-hour class could have one or more debates. Throughout the course, there will be several opportunities to be on a debate team.

4. The Shark Tank. This format is like a debate, but one team is selling the idea and the other team is an antagonist. Students make a 10-minute presentation of the paper, theory, or idea and then defend the ideas in the paper, and the sharks question and critique the ideas. Sharks decide to accept the idea and invest, or pass. The rest of the class participates by assessing the performance of the sharks and the presenters.

Ideas?

So if you were taking a course like this, which of these formats would you enjoy most? Which format do you think might promote the best mastery of the material, and the best engagement? Are there other ways to run a graduate seminar that I have not tried?

Maybe I should use a mix of formats, with some classes run as standard discussions and others as debates?

I’d welcome and appreciate any suggestions, critiques, and ideas.

Almost no one reads my work. Should I care?

I recently read an article that has been going around social media in which the authors argue that basically no one is reading academic journals. They argue that in order to be heard, and in order to shape policy, professors and academics should be writing Op-Eds.

The article, which I've linked to here, should be read with a few caveats. First of all, the authors suggest that the average academic paper is read in total by about 10 people. They provide no evidence or information about how they arrived at that estimate. Second, they are writing from the standpoint of social science and political science. In other words, the results may not apply to other disciplines. That said, I believe there are many reasons to take their idea seriously.

There are too many articles published every year.

There are so many scientific and academic journals operating right now. For example, the popular journal PLoS ONE published 31,500 articles in 2013…that's 86 articles a day. In 2014, they published even more: 33,000 articles. Only one of them was from my lab. Now, I happen to think that this particular article was a really good paper. It was based on my student Rachel's master's thesis. But it's only one of over 30,000 articles that year. According to the statistics on their own site, there were about 1400 views of our article. So far it's been cited twice. Is that good? Is that enough? Should I care? After all, it's only one paper of many that I have published in the last few years.

This is only the tip of the iceberg. As I said, this is one journal. There are other large journals like PLoS ONE. And there are many, many smaller journals with limited output. But still, it's estimated that this year alone there will be over 2 million articles published. Even if you assume that within your own field it's only a few thousand every year, finding the ones that matter can still be a problem. If you use Google Scholar (and I do) to research, you may have noticed that it tends to place heavily cited articles at the top of the search. This is good, because it gives you a sense of which articles have had the most impact in the field. This is bad because the first thing you see is the same article that everyone else has cited for the last 20 years. Unless you take the time to adjust your search, you are not going to see any of the new work.

And as if this weren't problem enough, there have been widely reported problems with the academic publishing world. For example, some journals have even had to withdraw articles, many articles, when it was revealed that they were entirely computer-generated gibberish. There are also hundreds and hundreds of so-called "predatory" journals in which the peer review is nonexistent, standards for publication are very low, and the journals operate solely to make money publishing papers that otherwise wouldn't be published. You can see a list of these predatory journals here. Even journals published by well-known companies have had difficulty recently. In some cases, editors have been accused of accepting articles with little or no review.

Why do we do this?

I cannot speak for other academics, but within my field and for me, the reason is simple. It's my job. As a professor at a large research institution, part of my job is to carry out scientific research and publish it in peer-reviewed journals. Publishing papers in peer-reviewed journals was necessary for me to obtain tenure. It is necessary for me to be able to compete for federal research dollars. Other forms of communication can help in terms of getting the message out, but as it stands now, publishing a popular article, a textbook, an Op-Ed, a newspaper article, or even a popular blog essentially does not count. I might as well be doing that on my own time. Which, I suppose, I am.

So in essence, we have designed and embraced a system that rewards publications in one format and does not reward publications in other formats. Unfortunately, the format that is rewarded is one that almost no one outside of the immediate field will ever read.

What do we do about this?

I am not entirely sure what to do about this, but I do believe that it is a real problem. I’m not suggesting that scientists and academics abandon publishing in academic journals. In fact, I still think that’s exactly where primary research belongs. As long as the peer review is being carried out properly, editors are behaving properly, and editorial standards are high, this is exactly where you want your best work to appear. I also don’t want it to be the case that scientists and academics begin pursuing popular media at the expense of academic publishing.

What I would like to see, however, is an appropriate balance. It might be time for internal performance review committees and promotion and tenure committees to broaden the scope of what counts as scientific, academic, and scholarly contributions. This would provide some incentive for researchers to publicize their work. At the very least, researchers should not be penalized for attempting to engage the public in their research. A successful research program is one that publishes in different outlets and for different audiences. We do this in my department: we work with the local public library, for example, to engage the public with popular science topics in psychology. Our research communication office works long and hard to publicize and promote research. However, much of this is still considered to be secondary.

Another possibility, one that is suggested by the editorial staff at PLoS ONE, is for individual researchers to publicize their own work. Researchers should share and tweet their research, as well as others'. And there are other platforms: Google+, for example, hosts a large scientific community that publicizes research, shares research, and even organizes virtual conferences. I've taken part in some of these, and they can be an effective way to share your work.

In the end, I wonder if we should all slow down, work more carefully, and think long and hard about the quality of our research versus the quantity of our publication output.  Otherwise, I think there is a real concern that the signal will be completely drowned out by the noise.

Grade Inflation at the University Level

I probably give out too many As. I am aware of this, so I may be part of the problem of grade inflation. Grade inflation has been a complaint in universities probably as long as there have been grades and as long as there have been universities.

Harvard students receive mostly As.

But the issue has been in the news recently. For example, a recent story asserted that the most frequent grade (i.e. the modal grade) at Harvard was an A. That seems a bit much. If Harvard is generally regarded as one of the world's best universities, you would think they would be able to assess their students across a broader range. A great Harvard undergrad should be a rare thing, and should be much better than the average Harvard undergrad. Evidently, all Harvard undergrads are great.

One long-time faculty member says that in recent years he has taken to giving students two grades: one that shows up on their transcript and one he believes they actually deserve. "I didn't want my students to be punished by being the only ones to suffer for getting an accurate grade."

In this way, students know what their true grade is, but they also get a Harvard grade that will be an A, so that they look good and Harvard looks good. It's not just Harvard, of course. This website, gradeinflation.com, lays out all the details. Grades are going up everywhere…but student performance may not be.

The university is a business, and As are what we make.

From my perspective as a university professor, I see the pressure from all sides, and I think the primary motivating force is the degree to which universities have heavily embraced a consumer-driven model. An article in The Atlantic this week got me thinking about it even more. As the article points out, we (the university) benefit when more students are doing well and earning scholarships. One way to make sure they can earn scholarships is to keep the grades high. It is to our benefit to have more students earning awards and scholarships.

In other words, students with As bring in money. Students with Cs do not. But this suggests that real performance assessment and knowledge mastery are subservient to cash inflow. I'm probably not the only one who feels that suggestion is true.

And of course, students, realizing they are the consumer, sort of expect a good grade for what they pay for. They get the message we are sending: grades matter more than knowledge acquisition; money matters more than knowledge. If they pay their tuition and fees on time, they kind of expect a good grade in return. They will occasionally cheat to obtain these grades. In this context, cheating is economically rational, albeit unethical.

Is there a better system?

I am not sure what to do about this. I'm pretty sure that my giving out more Cs is not the answer, unless all universities did this. I wonder if we really even need grades. Perhaps a better system would be a simple pass/fail? Or Fail/Pass/Exceed (three-way)? This would indicate that students have mastered the objectives in the course, and we (the university) could confidently stand behind our degree programs and say that our graduates have acquired the requisite knowledge. Is that not our mission? Does it matter to an employer if a student received an A or a B in French? Can they even use that as a metric when A is the modal grade? The employer needs to know that the student mastered the objectives for a French class and can speak French. Of course, this means that it might be tricky for graduate and professional schools to determine admission. How will medical schools know whom to admit if they do not have a list of students with As? Though if most students are earning As, that point is moot anyway.

In the end, students, faculty, and university administrators are all partially responsible for the problem, and there is no clear solution. And lurking behind it, as is so often the case, is money.

A Computer Science Approach to Linguistic Archeology and Forensic Science

Last week (Sept 2014), I heard a story on NPR's Morning Edition that really got me thinking… (Side note: I'm in Ontario, so there is no NPR, but my favourite station is WKSU via TuneIn Radio on my smartphone.) It was a short story, but I thought it was one of the most interesting I've heard in the last few months, and it got me thinking about how computer science has been used to understand natural language cognition.

Linguistic Archeology

Here is a link to the actual story (with transcript). MIT computer scientist Boris Katz realized that when people learn English as a second language, they make certain errors that are a function of their native language (e.g. native Russian speakers leave out articles in English). This is not a novel finding; people have known this for a long time. Katz, by the way, is one of the many scientists who worked with Watson, the IBM computer that competed on Jeopardy!

Katz trained a computer model on samples of English text such that it could detect the writer's native language based on errors in their written English. But the model also learned to determine similarities among the native languages themselves. The model discovered, based on errors in English, that Polish and Russian have historical overlap. In short, the model was able to recover the well-known linguistic family tree among many natural languages.
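
The general approach is straightforward to sketch. Below is a minimal, hypothetical illustration of the idea (not Katz's actual model): represent each English text by simple n-gram features and train a classifier to predict the writer's native language. The tiny "corpus" and its labels are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example sentences with typical second-language errors, and invented labels.
texts = [
    "I went to store and bought milk",      # dropped articles
    "She is teacher in small school",       # dropped articles
    "I am agree with this opinion",         # "am agree" calque
    "He explained me the problem",          # transferred verb argument structure
]
native_language = ["Russian", "Russian", "Spanish", "French"]

# Word unigram/bigram features feeding a simple linear classifier.
model = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, native_language)

print(model.predict(["I bought car yesterday"]))  # toy prediction on a new sentence
```

A real system would of course need thousands of labelled texts (e.g. learner corpora) and richer features, and the similarities among native languages would come from comparing the learned error profiles rather than from a toy example like this.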

The next step is to use the model to uncover new things about dying or disappearing languages. As Katz says:

"But if those dying languages have left traces in the brains of some of those speakers and those traces show up in the mistakes those speakers make when they're speaking and writing in English, we can use the errors to learn something about those disappearing languages."

Computational Linguistic Forensics

This is only one example. Another that fascinated me was the work of Ian Lancashire, an English professor at the University of Toronto, and Graeme Hirst, a professor in the computer science department. They noticed that the output of Agatha Christie—she wrote around 80 novels and many short stories—declined in quality in her later years. That itself is not surprising, but they thought there was a pattern. After digitizing her work, they analyzed the technical quality of her output and found that the richness of her vocabulary fell by one-fifth between the earliest two works and the final two works. That, and other patterns, are more consistent with Alzheimer's disease than with normal aging. In short, they tentatively diagnosed Christie with Alzheimer's disease based on her written work. You can read a summary HERE and you can read the actual paper HERE. It's really cool work.
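
As a rough illustration of the kind of measure involved, here is a minimal sketch of a vocabulary-richness calculation (a simple type-token ratio over a fixed window of words). This is far cruder than the analyses Lancashire and Hirst actually ran, and the file names are hypothetical, but it conveys the basic idea of quantifying lexical richness from digitized text.

```python
import re

def type_token_ratio(text: str, window: int = 10000) -> float:
    """Distinct words divided by total words, computed over a fixed-size
    window so that texts of different lengths are comparable."""
    words = re.findall(r"[a-z']+", text.lower())[:window]
    return len(set(words)) / len(words) if words else 0.0

# Hypothetical usage: compare an early novel with a late one.
# early = open("early_novel.txt", encoding="utf-8").read()
# late = open("late_novel.txt", encoding="utf-8").read()
# print(type_token_ratio(early), type_token_ratio(late))
```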

Text Analysis at Large

I think this work is really fascinating and exciting. It highlights just how much can be understood via text analysis. Some of this is already commonplace: we educators rely on software to detect plagiarism, and Facebook and Google are using these tools as well. One assumes that the NSA might be able to rely on many of these same ideas to infer and predict information and characteristics about the author of some set of written statements. And if a computer can detect a person's linguistic origin from English textual errors, I'd imagine it can be trained to mimic the same effects and produce English that looks like it was written by a native speaker of another language…but was not. That's slightly unnerving…

The Fine Print in the Syllabus

The end of July brings the realization that I'll be teaching graduate and undergraduate courses again in the fall, and that I need to prepare readings, lectures, and an official course outline for each course. In addition to being distributed to students on the first day of class, these outlines are archived and publicly available on the web. For example, here is the outline for the summer distance course that I am teaching this year. Here is the outline from the last time I taught the Introduction to Cognition course. My graduate courses use a similar format, and here is the outline from last fall's graduate seminar on cognition. As you can see, there is a lot of information about the course, but also a lot of slightly silly stuff directing students to websites about other policies.

Fine Print

Every year, when I send these course outlines to the department’s undergraduate coordinator, I am informed that I have used the wrong template or have forgotten something.

For example, last year I forgot this:

“Computer-marked multiple-choice tests and/or exams may be subject to submission for similarity review by software that will check for unusual coincidences in answer patterns that may indicate cheating.”

Do the students need to know this up front? Is it not enough that we tell them not to cheat? Can they file an appeal if they were caught cheating and did not know that I was going to check?

Not So Fine Print

Every year the list of required non-academic information gets longer and longer. For example, this year I forgot to include a mental health statement. According to the university, I need to include the following statement in all course outlines:

“If you or someone you know is experiencing emotional /mental distress, there are several resources here at Western to assist you.  Please visit:  http://www.uwo.ca/uwocom/mentalhealth/ for more information on these resources and on mental health.”

I think this is a very strange thing to have in a course outline. It has nothing to do with my class. Surely students already know about non-academic services, like mental health services? And why stop there? Maybe I should also consider a referral to student health services if they or someone they know is experiencing a pain in their foot? Or to the gym if they are experiencing weakness in the upper torso? Or to a cooking class if they are malnourished? We have not yet been asked to issue "trigger warnings", but I know that's probably coming…

What is the intent here? I'm not suggesting that students not be informed of all the options available to them in terms of university life. I just wonder how relevant it is to the course outline. I do not think this kind of information belongs there.

Is it about control?

I think much of this is about the university exerting top-down control. Requiring a series of statements for each course outline is a subtle power play. Academics sometimes like to feel immune to the "TPS report" mentality, but we get it, and it gets worse each year.

In 2003, when I began teaching at Western, I created a syllabus, handed it out, taught the class, and turned in the grades. Now, 11 years later, I use information from an official template for the syllabus, I send it for "approval" by the undergraduate office (it might be sent back), I send it to IT to be posted, I teach the course, I approve alternative exam dates at the request of the academic counsellor, and when I turn in the grades, they are also checked to make sure they are not too high or too low. Ten years ago we had a chair…now we have a chair plus 2 1/2 associate chairs (I was one of them for 4 years). Ten years ago, departments ran nearly every aspect of their own graduate programs. Now we have a central authority that has control over how exams are run, the thesis, and even the specific offer of admission. The letter that we write to students to offer admission to our graduate program is from a template, and any changes must be approved. This letter gets longer and more confusing each year. It's our TPS report. One of many TPS reports.

University, Inc.

The university is a business. I know it, everyone knows it.

Every year, we are informed that we need to meet targets for enrolment, to put "bums in seats". We are required to continually seek external funding and grants, to teach courses that will appeal to students registered in different programs, to attract more graduate students. We've been asked to "sex up" the title of a course to see if more people will sign up. We now need to report on "internationalization" activities. That's a buzzword, folks. We're doing buzzword reports.

I’m not naive, I know the pressures. I’m just disappointed. And worried that it’s getting worse each year.

 

Canada’s Scientific Priorities are Heading in the Wrong Direction

The funding of basic research in Canada is still healthy, but I think we may be heading in the wrong direction. In short, we’re still funding basic research, but it is less basic, and a greater proportion of the funding seems to go to those projects with explicit commercial ends.

A recent article in the New York Times highlighted the increasing unhappiness within the scientific community about the Canadian government's recent shift from funding basic research to funding industry partnerships. This can have serious consequences, because the government can be in a position to fund certain industries over others. One of the scientists interviewed for the NYT piece is Dr. Hüner, a Canada Research Chair in environmental-stress biology from my own university (The University of Western Ontario). His research studies how plants deal with stress and suboptimal conditions. His NSERC grant was cut by more than 50%, leading to job losses in his lab and, most importantly, a reduction in his research on environmental responses to climate change.

This kind of cut is one of the things that really troubles me. It appears that the Harper government is selectively cutting programs that deal with climate change and toxic waste effects. There have been other recent stories about closed science libraries and the closure of the Experimental Lakes Area. It is difficult to avoid drawing the conclusion that Prime Minister Harper is trying to defund these programs more than others.

I should say that I am not against funding these industry partnerships (so NSERC, please do not cut my grant; I am still so very grateful!). Indeed, one of my own postdocs benefits from the Mitacs program, which was specifically created to promote partnerships with industry. I just worry that the funding reallocations are politically motivated, or at least that they can appear to be political.

Canada (and even my university) overtly claims to want to be on the "world stage". And the world (or at least the NYT) is noticing the increasing unhappiness with our research funding allocations.
