River Water

We are like rivers.

I’ve been reading a lot about privilege, gender, and colonization. I will not even try to pretend to be an expert in this area. I was thinking about how I might be unaware of the privilege in my own life, and about the role of luck and chance. The following metaphor / parable is what I came up with. It’s a bit of a clumsy analogy, but I thought it worked on a simple level for me.

A river flows in the direction that it flows because of many things. Although some rivers are fast, or slow, or deep, or wide, they are all made of the same water. A river is nothing more than the water that flows along a course that was created by the water that came before it. The water that created the channel, the water that created the canyon, even the water that is downstream, pulling the river along its course. The river doesn’t know this. It cannot know the struggles of the earlier river-water that moved the rocks. It cannot know the ease with which the earlier river-water flowed down an unobstructed path. It cannot know if the earlier river-water was dammed or if a melting glacier helped the earlier river-water to speed its course and deepen its channel. It cannot know that all rivers eventually stop flowing and that all river-water becomes part of the same sea.

All the river can know is that it is flowing now: flowing quickly or flowing slowly, constrained or unconstrained, oblivious to its own history even as its present course and identity are shaped by that history.

We are like rivers in this way. We flow along in our lives, making progress, confronting obstacles, and not always knowing the full context of our life course.

Better understanding

But we can try to know. Even as we try to live in the present, we can try to understand how the past shaped the channels and canyons of our life-course. We can see how our current circumstances might make things easier or more difficult depending on the obstacles that previous generations faced. We are beneficiaries of the sometimes arbitrary circumstances that favoured or did not favour those who came before us. Those of us whose lives flow through clear-cut channels may not always realize that we’re travelling a path with fewer obstacles, because those obstacles were removed long before us. We receive these benefits, earned or unearned, aware or unaware. But people whose paths are constrained or obstructed are all too aware of the impedance.

We’re all the same river-water, flowing to the same sea. We would do well to be aware of our privilege and to understand that while we may not all have the same course to travel, we all still have to travel to the same place.

A Curated Reading List

Fact: I do not read enough of the literature any more. I don’t really read anything. I read manuscripts that I am reviewing, but that’s not really sufficient to stay abreast of the field. I assign readings for classes, grad students, and trainees, and we may discuss current trends. This is great for the lab, but for me the effect is something like saying to my lab, “read this and tell me what happened”. And I read Twitter.

But I always have a list of things I want to read. What better way to work through these papers than to blog about them, right?

So this is the first instalment of “Paul’s Curated Reading List”. I’m going to focus on cognitive science approaches to categorization and classification behaviour. That is my primary field, and the one I most want to stay abreast of. In each instalment, I’ll pick a paper that was published in the last few months, a preprint, or a classic. I’ll read it, summarize it, and critique it. I’m not looking to go after anyone or promote anyone. I just want to stay up to date. I’ll post a new instalment on a regular basis (once every other week, once a month, etc.). I’m doing this for me.

So without further introduction, here is Reading List Item #1…

Smith, J. D., Jamani, S., Boomer, J., & Church, B. A. (2018). One-back reinforcement dissociates implicit-procedural and explicit-declarative category learning. Memory & Cognition, 46(2), 261–273.


This paper was published online last fall but only appeared officially in February of 2018. I came across it this morning while looking at the “Table of Contents” email from Memory & Cognition. Full disclosure: the first author was my grad advisor from 1995-2000, though we haven’t collaborated since then (save for a chapter). He’s now at Georgia State and has done a lot of fascinating work on metacognition in non-human primates.

The article describes a single study on classification/category learning. The authors are working within a multiple systems approach to category learning. According to this framework, a verbally-mediated, explicit system learns categories by trying to abstract and use a rule, and a procedurally-mediated, implicit system learns categories by stimulus-response (S-R) association. Both systems have well-specified neural underpinnings. These two systems work together, but sometimes they are in competition. I know this theory well and have published quite a few papers on the topic. So of course, I wanted to read this one.

A common paradigm in this field is to introduce a manipulation that is predicted to impair or enhance one of the systems and leave the other unharmed, in order to create a behavioural dissociation. The interference in this paper was a 1-back feedback manipulation. In one condition, participants received feedback right after their decision; in another, they received feedback about their decision one trial later. Smith et al. reasoned that the feedback delay would disrupt the S-R learning mechanism of the procedural/implicit system, because it would interfere with the temporal contiguity of stimulus and response. It should have less of an effect on the explicit system, since learners can use working memory to verbalize the rule they used and the response they made.


In the experiment, Smith et al. taught people to classify a large set (480) of visual stimuli that varied along two perceptual dimensions into two categories. You get 480 trials, and on each trial you see a shape, make a decision, get feedback, and then see another shape, and so on. The stimuli themselves are rectangles that vary in size (dimension 1) and pixel density (dimension 2). The figure below shows examples of the range. There was no fixed set of exemplars; rather, “each participant received his or her own sample of randomly selected category exemplars appropriate to the assigned task”.


They used a 2 × 2 design with two between-subjects factors. The first factor was category set. Participants learned either a rule-based (RB) category set, in which a single dimension (size or density) supports an easily verbalized rule, or an information-integration (II) category set, in which both dimensions need to be integrated at a pre-decisional stage. The II categories can’t be learned very easily by a verbal rule, and many studies have suggested they are learned by the procedural system. The figure below shows how the hundreds of individual exemplars would be divided into two categories for each of the category sets (RB and II).
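To make the design concrete, here is a minimal sketch of how stimuli and category labels could be generated in a task like this. The dimension ranges and boundary placements are placeholder values of my own, not the actual parameters used by Smith et al.

```python
import numpy as np

rng = np.random.default_rng(2018)

def sample_stimuli(n_trials=480):
    """Sample rectangle stimuli varying on two dimensions.
    Units and ranges are arbitrary stand-ins for the calibrated
    values used in the actual experiment."""
    size = rng.uniform(0, 100, n_trials)     # dimension 1: rectangle size
    density = rng.uniform(0, 100, n_trials)  # dimension 2: pixel density
    return np.column_stack([size, density])

def assign_category(stimuli, structure="RB"):
    """RB: a one-dimensional boundary supports an easily verbalized rule.
    II: a diagonal boundary requires integrating both dimensions
    pre-decisionally, so no simple verbal rule works."""
    size, density = stimuli[:, 0], stimuli[:, 1]
    if structure == "RB":
        return (size > 50).astype(int)   # e.g., "big rectangles are category A"
    return (size > density).astype(int)  # diagonal partition of the space

stims = sample_stimuli()
labels = assign_category(stims, structure="II")
```

Each simulated participant gets their own random sample, which mirrors the quoted point that there was no fixed exemplar set.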


The second factor was feedback. After each decision, you either received feedback immediately (0Back) or one trial later (1Back). The 1Back condition creates a heavier working-memory load, so it should make the RB categories harder to learn at first. But it should interfere more fundamentally with II learning by the procedural system, because the 1-Back delay disturbs the S-R association.
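The feedback manipulation amounts to a one-trial lag in the feedback queue. Here is a rough sketch of the trial loop; `respond` and `learn` are hypothetical stand-ins for the participant’s decision and learning processes, not anything from the paper.

```python
from collections import deque

def run_session(stims, labels, respond, learn, lag=0):
    """Run one session. lag=0 delivers feedback immediately (0Back);
    lag=1 holds each trial's feedback until the next trial (1Back)."""
    pending = deque()
    for stim, label in zip(stims, labels):
        response = respond(stim)
        pending.append((stim, response, response == label))
        if len(pending) > lag:
            # With lag=1 this feedback refers to the *previous* trial,
            # breaking the temporal contiguity between stimulus, response,
            # and reinforcement that S-R learning depends on.
            learn(*pending.popleft())
```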


So what did they find? The learning data are plotted below and suggest that the 1Back feedback made it harder to learn the RB categories at first, and seemed to hurt the II categories at the end. The three-way ANOVA (Category × Feedback × Block) provided evidence to that effect, but it’s not an overwhelming effect. Smith et al.’s decision to focus a follow-up analysis on the final block was not very convincing. Essentially, they compared means and 95% CIs for the final block in each of the four cells and found that performance in the two RB conditions did not differ, but performance in the two II conditions did. Does that mean the delayed feedback was disrupting procedural learning? I’m not sure. Maybe participants in that condition (II-1Back) were just getting weary of a very demanding task. A visual inspection of the data seems to support that alternative conclusion as well. Exploring the linear trends might have been a stronger approach.
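For what it’s worth, here is a sketch of the trend analysis I have in mind: estimate each participant’s linear trend (slope of accuracy across blocks) and compare slopes between feedback conditions, instead of comparing only final-block means. The accuracy matrices here are hypothetical; I don’t have the raw data.

```python
import numpy as np
from scipy import stats

def block_slopes(acc):
    """acc: (n_subjects, n_blocks) accuracy matrix for one condition.
    Returns each subject's linear trend (slope) across blocks."""
    blocks = np.arange(acc.shape[1])
    return np.array([stats.linregress(blocks, row).slope for row in acc])

# Hypothetical comparison of learning trends in the two II conditions:
# slopes_0back = block_slopes(acc_ii_0back)
# slopes_1back = block_slopes(acc_ii_1back)
# t, p = stats.ttest_ind(slopes_0back, slopes_1back)
```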


The second analysis was a bit more convincing. They fit each subject’s data with a rule model and an II model, with each model trying to account for the subject’s final 100 trials. This is pretty easy to do: you are just looking to see which model provides the most likely account of the data, and you can then plot the best-fitting model. For subjects who learned the RB category, the optimal boundary should be the vertical partition; for the II category, the optimal model is the diagonal partition.
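The logic of that model comparison can be sketched roughly as below. I’m simplifying the decision-bound models used in this literature to two one-parameter boundaries (a single-dimension criterion versus a fixed diagonal) fit by maximum likelihood; the real models have additional noise parameters.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(signed_dist, responses, noise=5.0):
    """Negative log-likelihood of a subject's responses given each
    stimulus's signed distance from a candidate boundary. A logistic
    choice rule stands in for perceptual/criterial noise."""
    p_a = 1.0 / (1.0 + np.exp(-signed_dist / noise))
    p = np.where(responses == 1, p_a, 1.0 - p_a)
    return -np.sum(np.log(np.clip(p, 1e-10, None)))

def fit_rule_model(stims, responses, dim=0):
    """Rule model: free criterion on a single dimension (a vertical partition)."""
    res = minimize_scalar(
        lambda c: neg_log_likelihood(stims[:, dim] - c, responses),
        bounds=(0, 100), method="bounded")
    return res.fun

def fit_ii_model(stims, responses):
    """II model: diagonal boundary with a free intercept."""
    res = minimize_scalar(
        lambda c: neg_log_likelihood(stims[:, 0] - stims[:, 1] - c, responses),
        bounds=(-100, 100), method="bounded")
    return res.fun

# Fit each model to a subject's final 100 trials; since both models here
# have one free parameter, the lower negative log-likelihood wins.
```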

As seen in the figure below, the feedback manipulation did not change the strategy very much for subjects who learned the RB categories. Panels (a) and (b) show that the best-fitting model was usually a rule-based one (the vertical partition). The story is different for subjects learning the II categories. First, there is far more variation in the best-fitting model. Second, very few subjects in the 1Back condition (d) show evidence of using the optimal strategy (the diagonal partition).



Smith et al. concluded: “We predicted that 1-Back reinforcement would disable associative, reinforcement-driven learning and the II category-learning processes that depend on it. This disabling seems to have been complete.” But that’s a strong conclusion. Too strong. Based on the modelling, the more measured conclusion seems to be that about 7-8 of the 30 subjects in the II-0Back condition learned the optimal boundary (the diagonal) compared to about 1 subject in the II-1Back condition. Maybe just a handful of keeners ended up in the II-0Back condition and learned the complex structure? It’s not easy to say. There is some evidence in favour of Smith et al.’s conclusion, but it’s not at all clear.

I still enjoyed reading the paper. The task design is clever, and the predictions flow logically from the theory (which is very important). But I wish they had done a second study as an internal replication to explore the stability of the result, or maybe a second study with the same category structure but different stimuli. It’s incremental work. It adds to the literature on the multiple systems theory but does not (in my opinion) rule out a single-system approach.

Tune in again in a few weeks for the next instalment. Follow my blog if you like, and check the tag to see the full list. As the list grows, I may create a better structure for these, too.


Presidential Power Pose

The president at work

As much as I don’t want to write about US presidential politics, I was struck by a photograph officially released by the Office of the White House showing the president hard at work during the government shutdown. As you can see, it is a staged photograph of the president sitting at his desk in the Oval Office, on the phone. The photo has been mocked online, but I’m not really interested in mocking any more.

The president seems small and ill-at-ease in this official photo.

The first thing that struck me was how small he looked. I am not a fan of the current US president, but he never struck me as a small person. In fact, many people commented on his body language during the 2016 election.

During the campaign

In the following picture, one that has also been seen by millions of people, candidate Trump is seen glowering and looming over candidate Hillary Clinton. He appears aggressive, ready to attack (not in a good way).

Candidate Trump looms and glowers over Hillary Clinton.

In other debates and appearances, he commanded attention. During the Republican convention, I even commented to friends that I thought he was going to win the election. He stood up there, absorbing the crowd’s energy, fully in control of the vibe (so much so that I felt uneasy for days afterward). At campaign rallies, for better or worse, he commanded attention, an attribute no doubt honed and developed in the aggressive world of NY/international real estate development, casinos, pageant promotion, and reality TV. You don’t have to be a fan of his to notice this.

But in the “at work at the desk” photo, he seems so very small. Much smaller than his actual size (6’2” or 6’3”, depending on who you believe). The desk is too big for him, too consequential. Even the hat seems too large. He appears diminished. I don’t think you need to oppose the president to notice this. He really does seem to be making himself smaller, or is unable to make himself appear big enough.

Body language never lies

Body language is a fascinating subject; it’s the domain of ethologists, comparative psychologists, and social psychologists. Our body language often conveys things that are at odds with our spoken language. It often gives away something that we may wish to conceal. Our body language is the link to the more primitive self: the inner ape that is often concealed and covered over by culture, language, and society. In the president’s photograph, the body language reveals a man who does not belong, who is out of place, and who possibly knows he is too small for the role.

Unlike the president’s spoken language, body language doesn’t lie.

The Creation Myth and Fear of Resting

The sunrise at Western University, as seen from my 7th floor lab.

November 30, 2017

I feel very unfocused lately, and I think I know why. When I was writing my two grant proposals in October, I really felt like I had control of my ideas. I felt like I knew what I was working on and what I wanted to be doing with my research program, my graduate students, and trainees. This is a great feeling, and I was filled with the satisfaction not only of working hard on the proposals but also of having so many ideas and projects that I wanted to pursue. I could not wait to get started on some of the new projects.

But right after they were submitted, I rested. This seems natural, of course, for I’d worked hard and wanted to celebrate a job well done and relax a bit. Also, I had just undergone minor surgery, so some recovery time was needed. But a week later, I needed to turn to other things that required my attention, and before I realized what had happened, I was overwhelmed with our departmental job searches and my office and lab’s move to the new building. My research ideas, having been developed and nurtured in the NSERC and SSHRC proposals, languished from the inattention.

That is, I worked. I seemed to have it together, I rested, and it all seemed to slip away.

It was like the 7th day.

In the creation myth in Genesis, God worked hard to create the universe and then he rested. And then right after that, right after sitting back, looking with satisfaction at what he’d done, and cracking open a divine beer, he seems to lose focus…humans took over, they started killing each other, and he can’t really seem to remember why he created us in the first place, or what his plan is. He takes it out on us. He clearly starts to resent his work…he keeps coming back to it every so often, but the magic is gone. He rested and lost focus.

I think this is a metaphor that often goes unexplored in the Bible (or maybe it is interpreted this way; I’m really not up on Bible scholarship). The creation myth can be seen as a story about what happens when you rest on your laurels and stop working on something. You step back, get caught up in other things, and you lose your train of thought. The ideas fade, they take a back seat, and it can be so difficult to get back in control that you risk starting to resent the ideas.

I think that’s the underlying theme in Genesis: God rested and the universe took a back seat. It got out of hand and he never quite got it back the way he wanted. He started to resent the work and even tried to destroy it.

The inevitability of forward motion.

I’m not trying to say I’m God here, but I am supposed to be in control of my research program. And there are times, when I’m in the middle of working on a project, or paper, or grant, that I really think I can see the big picture. I can glimpse a bigger vision for my research on cognition, concepts, and categories. I think I’ve created something worthwhile. But damnit, if I step away for a week and get caught up in a PhD defence, or faculty hiring, or committee work, or the like, it can be so hard to put things back together.

And the lesson in Genesis seems to be: you can’t. You can’t put it back in that pristine state. But you can’t give up either. You have to let the ideas work themselves out. You have to come back and not be afraid to admit you made a mistake. Sometimes you have to start over or learn new skills. You may have to look at things from a new perspective while realizing that you can’t ever get back to the garden.

I’m not a religious believer… but I think there’s still a good lesson here: Even the divine creator has trouble keeping it together after a break.

The Infinity: Email Management and Engagement

It’s a cold and rainy Sunday morning in November. I’m drinking some delicious dark coffee from Balzac’s.

My wife and I are each working on different things and taking advantage of the relative morning quiet. I’m at the kitchen table working off my laptop, listening to music on my headphones, and working on overview material: looking at the emails I have to respond to. I criticize myself for procrastinating, which is in itself an extra layer of procrastinating.

Email is the engine of misery

I take a look at my work email inbox. It is not too bad for a professor. I keep it organized, and the inbox contains only those things that need a reply. But there are 59 messages in there that I need to reply to; four of these have been awaiting a reply since September. Even while I write this, I’m feeling a real sense of anxiety and conflict. On the one hand, I greatly desire to spend hours slogging through the entire list and trying to deal with the backlog. I’d love to look at INBOX = 0. I think that would make me feel great (which is a strange belief to have…I have never had INBOX = 0, so how do I know it would make me feel great?). Even an hour could make a good dent and dispense with at least 2/3 of the messages.

But at the same time, I want to ignore all of it. To delete all the email. I think about Donald Knuth’s quote about email. Knuth is a computer scientist at Stanford who developed, among other things, the TeX system of typesetting. He has an entry on his website about email indicating that he no longer has an email address.

“Email is a wonderful thing for people whose role in life is to be on top of things. But not for me; my role is to be on the bottom of things. What I do takes long hours of studying and uninterruptible concentration. I try to learn certain areas of computer science exhaustively; then I try to digest that knowledge into a form that is accessible to people who don’t have time for such study.”

This quote, and the idea behind it, is something I really aspire to. It’s one of my favourite quotes and a guiding principle…but I can’t make the leap. Like Knuth, I also write books and articles, and I try to get to the bottom of things. But it seems like I never get beneath the surface, because I’m always responding to email, sending email, tweeting, and engaging on social media. Deeper analysis never happens because I’m preoccupied with the surface. I feel trapped by this.

And yet, I cannot ignore the surface level. Engagement with email is part of my job. Others depend on my responding. For example, I have a now-retired departmental colleague who just never responded to email, and this was very frustrating to deal with. I suspect (I know) that others picked up the slack when he failed to be responsive. I have a current colleague who is much the same. So I don’t endorse blowing off some aspects of one’s job, knowing that others will pick up the pieces. I don’t want to shirk my administrative and teaching responsibilities, even if it means I sacrifice the ability to have dedicated research and writing time.

Give and Take

In the end, I am trapped in a cage that I spend hours each day making stronger. Trapped in a pit that I work ever longer hours to make deeper. The incoming email will not stop, but I could probably slow it down by not sending any email out, by providing FAQs on my syllabus about when to email, or by delegating email to TAs.

The real question is: if I give less time to email, will it take less of my time? If so, will I use that time wisely? Or will I turn to another form of distraction? Is email the problem? Or am I the problem?

The one “productivity hack” that you probably avoid like the plague

This past week, my office phone rang. It rarely does, and even when it does ring, I rarely answer it. I usually let the call go to voicemail. My outgoing voicemail says: “I never check voicemail, please email me”. And even if someone does leave a voicemail, it’s transcribed and sent to my email.

This is so inefficient, and self-sabotaging, given that I, like most academics, moan about how much email I have to process.

But this time, noticing that it was another professor in my department who was calling, I picked it up. My colleague who was calling is the co-ordinator for the Psychology Honours program and he had a simple question about the project that one of my undergraduate Honours students was working on. We solved the issue in about 45 seconds.

If I had followed the standard protocol, he would have left a voicemail or emailed me (or both). It would have probably taken me a day to respond, and the email would have taken 5-8 minutes for me to write. He’d have then replied (a day later), and if my email was not clear, there might have been another email. Picking up the phone saved each of us time and effort and allowed my student’s proposed project to proceed.

Phone Aversion.

Why are we so averse to using the phone?

I understand why, in principle: it’s intrusive, it takes you out of what you were doing (answering email, probably), and you have to switch tasks. The act of having to disengage from what you are doing and manage a call is cognitively demanding. After the call, you then have to switch back. So it’s natural to make a prospective judgement to avoid taking the call.

And from the perspective of the caller, you might call and not get an answer; then you have to engage a new decision-making process: should I leave a message, call again, or just email? This cognitive switching takes time and effort. And of course, since many of us resent being interrupted by a call, we may assume that the person we are calling also resents the interruption, and so we avoid calling out of politeness (maybe this is more of a Canadian thing…).

So there are legitimate, cognitive and social/cognitive reasons to avoid using the phone.

We Should Make and Take More Calls.

My experience was a small revelation, though. Mostly because after the call, while I was switching back to what I had been doing, I thought about how much longer (days) the standard email approach would have taken. So I decided that, going forward, I’m going to try to make and take more calls. It can be a personal experiment.

I tried this approach a few days ago with some non-university contacts (for the youth sports league I help to manage). We saved time and effort. Yes, it might have taken a few minutes out of each other’s day, but it paled in comparison to what an email-based approach would have taken.

For Further Study

Although I’m running a “personal experiment” on phone call efficiency, I’d kind of like to study this in more detail: perhaps design a series of experiments in which two (or more) people are given a complex problem to solve, and we manipulate how much time they can spend on email versus on the phone. We’d track things like cognitive interference. I’m not exactly sure how to do this, but I’d like to look at it more systematically. The key things would be how effectively people solve the problems, and if and how one mode of communication interferes with other tasks.

Final Thoughts

Do you prefer email or a phone call? Have you ever solved a problem faster on the phone vs email? Have you ever found the reverse to be true?

Or do you prefer messaging (Slack, Google Chat, etc.) which is more dynamic than email but not as intrusive as a phone call?




Artificially Intelligent—At the Intersection of Bots, Equity, and Innovation

This article was written in collaboration with my wife Elizabeth. We wrote this together and the ideas were generated during some of the great discussions we had during our evening 5k runs.

We all remember Prime Minister Trudeau’s famous response when asked about his gender equity promise for filling roles in the cabinet: “because it’s 2015.” And really, this call to action comes quite late in the historical span of modernity, but we’re glad someone at the highest levels of government in a developed nation has strongly proclaimed it. Most of us in Canada, and likely around the world, were pleased to see Trudeau had staffed his cabinet with a significant number of female leaders in important decision-making roles. And now, it’s 2017–a year that has been pivotal, to say the least. Last spring, Canada’s Minister of Science, Dr. Kirsty Duncan, announced that universities in Canada are now required to improve their processes for hiring Canada Research Chairs and ensure those practices and review plans are equitable, diverse, and inclusive. The Government of Canada’s announcement is a call to action to include more women and other underrepresented groups at these levels, and it essentially comes down to an ultimatum: research universities will simply not receive federal funding allocations for these programs unless they take equity, diversity, and inclusion seriously in their recruitment and review processes.

When placed under the spotlight, the situation is a national embarrassment. Currently there is one woman Canada Excellence Research Chair in this country, and for women entrepreneurs the statistics are not much better. Women innovators in the industrial or entrepreneurial sphere are often left hanging without a financial net, largely as a result of a lack of overall support in business environments and major gaps in policy and funding. The good news is that change is happening now, and it’s affecting policies and practices at basic funding and policy levels. Federal and provincial research granting agencies in Canada are actively responding to the call for more equitable and inclusive review practices within the majority of their programs. The message is clear from the current Canadian government: get on board with your EDI policies and practices, or your boat won’t leave the harbour. But there’s always more work to be done.

The Robot Revolution

Beyond our pivotal political moment in history and the ongoing necessity of a level playing field for underrepresented groups, humans are situated at a crossroads of theory and praxis in human-machine interaction. The current intersection of human and machine has critical implications for the academy, innovation, and our workplaces. It exposes the gaps and shows what is possible, and we know the tools are here and must be harnessed for change. Even though we are living through mini “revolutions” each day as new technologies, platforms, and code stream before our very eyes, humanity has been standing at this major intersection for a couple of centuries or more–at the very least, since the advent of non-human technologies that help humans process information and communicate ideas (cave paintings, the book, the typewriter, Herb Simon’s General Problem Solver). The human-AI link we need to critically assess now, however, is how this convergence of human and machine can work for women and underrepresented groups in the academy and entrepreneurial sectors in powerful ways. When it comes to creating more equitable spaces and providing women with the pay they deserve, we need to move beyond gloomy statements like “the robots are taking our jobs.” We must seek to understand how underrepresented and underpaid people can benefit from robots rather than running from them. And we must seek to understand why women in the academy, industry, and other sectors haven’t been using AI tools in dynamic ways all along. [Some are, of course, as evidenced here: two women business owners harnessed the power of technology to grow their client and customer base by sending emails from a fictional business partner named “Keith.” Client response to “Keith” seemed to do the trick in getting their customers and backers to take them seriously.]

Implicit Bias

In the psychology of decision making, a bias is usually defined as a tendency to make decisions in a particular way. In many cases, the bias can be helpful and adaptive: we all have a bias to avoid painful situations. In other cases, the bias can lead us to ignore information that would result in a better decision. An implicit bias refers to a bias that we are unaware of, or the unconscious application of a bias that we are aware of. The construct has been investigated in how people apply stereotypes. For example, if you instinctively cross the street to avoid walking past a person of a different race or ethnic group, you are letting an implicit bias direct your behaviour. If you instinctively tend to doubt that a woman who takes a sick day is really sick, but tend to believe the same of a man, you are letting an implicit bias direct your behaviour. Implicit bias has also been shown to affect hiring decisions and teaching evaluations. Grants that are submitted by women scientists often receive lower scores, and implicit bias is the most likely culprit. Implicit bias is difficult to avoid precisely because it is implicit: the effect occurs without our being aware of it. We can overcome these biases if we are able to become more aware that they are happening. But AI also offers a possible way to overcome them.

An Engine for Equity at Work

AI and fast-evolving technologies can and should be used by women right now. We need to understand how they can be harnessed to create balanced workplaces, generate opportunity in business, and improve how we make decisions that directly affect women’s advancement and recognition in the academy. What promise or usefulness do AI tools hold for the development of balanced and inclusive forms of governance, review panel practices, opportunities for career advancement and recognition, and funding for start-ups? How can we use the power of these potent and disruptive technologies to improve processes and structures in the academy and elsewhere to make them more equitable and inclusive of all voices? There’s no denying that the tech space is changing things rapidly, but what is most useful to us now for correcting or improving imbalances, or for fixing inequitable, crumbling, and un-useful patriarchal structures? We need a map to navigate the intersection of rapid tech development and human-machine interaction, and to use AI effectively to reduce cognitive and unconscious biases in our decision-making; to improve the way we conduct and promote academic research, innovation, and governance for women and underrepresented groups of people.


Some forward-thinking companies are using this approach now. For example, several startups are using AI to prescreen candidates for possible interviews. In one case, the software (Talent Sonar) structures interviews, extracts candidate qualifications, and removes candidates’ names and gender information from the report. These algorithms are designed to help remove implicit bias in hiring by focusing on the candidate’s attributes and workplace competencies without any reference to gender. Companies relying on these kinds of AI algorithms report a notable increase in hiring women. Artificial intelligence, far from replacing workers, is actually helping to diversify and improve the modern workforce.

Academics have seen this change coming. Donna Haraway, in her Cyborg Manifesto, re-conceptualizes modern feminist theory through a radical critique of the relationship between biology, gender, and cybernetics. For Haraway, a focus on the cybernetic–or the artificially intelligent–removes the reliance on gender in changing the way we think about power and how we make decisions about what a person has achieved, or is capable of doing. Can we, for example, start to aggressively incorporate AI methods for removing implicit or explicit bias from grant review panels–or, more radically, remove humans from the process entirely? When governing boards place their votes for who will sit on the next Board of Trustees, or when university review committees adjudicate a female colleague’s tenure file, could this not be done via AI mechanisms, or with an application that eliminates gender and uses keyword recognition to assess the criteria? When we use AI to improve our decision making, we also have the ability to make it more equitable, diverse, and inclusive. We can remove implicit or explicit cognitive biases based on gender or orientation, for example, when we are deciding who will be included in the next prestigious cohort of Canada Research Chairs.

AI can, and will, continue to change the way human work is recognized, in progressive ways: recognition of alternative work during parental leaves, improved governance and funding models, construction of equitable budgets and policy, and enhanced support for women entrepreneurs and innovators. AI is genderless. It is non-hierarchical. It has the power to be tossed like a stick of dynamite to disrupt ancient academic structures that inherently favour patriarchal models for advancing up the tenure track. Equalization via AI gives women and underrepresented groups the power to be fully recognized and supported, from the seeds of their innovation (the academy) to the mobilization of those ideas in entrepreneurial spaces. The robots are in fact still working for us–at least, for now.