
The Scientific Workflow

The Minda Lab

When new trainees enter your lab, do you have a plan or a guide for them? I have a lab manual that explains roles and responsibilities, but I did not (until now) have a guide for how we do things. I wrote this to help my own trainees after a lab meeting last week where we discussed ideas around managing our projects. It started as a simple list, and I’m now making it part of my lab manual.

So this is my guide for carrying out cognitive psychology and cognitive science research in my lab. The workflow is specific to my lab, but can be adapted. If you think this is helpful, please feel free to share and adapt for your own use. You can keep this workflow in mind when you are planning, conducting, analyzing, and interpreting scientific work. You may notice two themes that seem to run throughout the plan: documenting and sharing. That’s the take home message: Document everything you do and share your work for feedback (with the group, your peers, the field, and the public). Not every project will follow this outline, but most will. 

Theory & Reading

The first step is theory development and understanding how our work relates to the relevant literature. We’re involved in cognitive science and develop and test theories about how the mind forms concepts and categories. We should work from two primary theories: prototype/exemplar theory, which deals with category representations, and multiple-systems theory, which addresses the category-learning process and rule use. You can keep up with developments using Google Scholar alerts and recommendations.

We want to test the assumptions of these theories, understand what they predict, probe their limitations, and contrast them with alternative accounts. We’re going to design experiments that help us understand the theory and the models, and that let us refine or reject aspects of our theorizing.

  • Use Google Scholar to find updates that are important for your research.
  • Save papers in Paperpile and annotate as needed.
  • Document your work in Google Docs.
  • Share interesting papers and preprints in the relevant channel in Slack.

Hypotheses Generation

Hypotheses are generated to test assumptions and aspects of the theory and to test the predictions of other theories. A hypothesis is a formal statement of something that can be tested experimentally, and these often arise from more general “research questions”: broad statements about what you are interested in or trying to discover. You might arrive at a research question or an idea while reading a paper, at a conference, while thinking about an observation you made, or by brainstorming in an informal group or lab meeting. Notice that all of these assume that you put in some time and effort to understand the theory and then allow some time to work over ideas in your mind, on paper, or in a computer simulation.

  • Work on hypothesis generation in lab meetings, our advisory meetings, and on your own.
  • Document your work and ideas in Google Docs (or your own notes).
  • Share insights in lab meetings and in the relevant channel in Slack.

Design study/experiment

Concurrent with hypothesis generation is experimental design. We are designing experiments to test hypotheses about category representation and learning and/or the predictions of computational models. Avoid the temptation to put the cart before the horse and come up with experiments and studies that produce an effect for its own sake. We want to test hypotheses generated from theories and also carry out exploratory work to help refine our theories. We don’t just want to generate effects.

The design comes first and you need to consider the logic of your experiment, what you plan to manipulate, and what you want to measure. We also want to avoid the temptation to add in more measures than we need, just to see if there’s an effect. For example, do you need to add in 2-3 measures of working memory, mood, or some demographic information just to see if there’s an effect there? If it’s not fully justified, it may hurt more than help because you have non-theoretically driven measures to contend with. I’ve been guilty of this and it always comes back to haunt me.

  • Work on experimental design in lab meetings, advisory meetings, and on your own.
  • Document your work in Google Docs.
  • Use G*Power to estimate correct sample size.
  • Use PsychoPy or Qualtrics to build your experiment.
  • Test these experiment protocols often.
  • Develop a script for research assistants who will be helping you carry out the study.
  • Share insights in lab meetings and in the relevant channel in Slack.
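G*Power is the tool named above, but the same a-priori calculation can be sketched in Python with statsmodels. This is a stand-in for illustration, not a lab requirement, and the inputs (a medium effect of d = 0.5, alpha = .05, power = .80) are purely illustrative:

```python
# Estimate the sample size needed for a two-sample t-test,
# analogous to an a-priori power analysis in G*Power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # Cohen's d (assumed, illustrative)
    alpha=0.05,               # Type I error rate
    power=0.80,               # 1 - Type II error rate
    alternative='two-sided',
)
print(round(n_per_group))     # about 64 participants per group
```

Whichever tool you use, record the inputs (effect size, alpha, power) in your Google Doc so the justification carries through to the preregistration.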

Analysis Plan & Ethics Protocol

This is where we start to formalize things. An analysis plan links the hypothesis and the experimental design to the dependent variables and/or outcome measures. In this plan, we’ll describe and document how the data will be collected, visualized, analyzed, stored, and shared. The plan should describe how we will deal with outlier data, missing data, data from participants who did not complete the experiment correctly, experimenter error, equipment malfunction, etc. It can include tentative predictions derived from a model and a justification of how we intend to analyze and interpret the data. This plan can (and probably should) be preregistered with OSF, which is where we’ll plan to share the data we collect with the scientific community.

At the same time, we also want to write a description of our experiment, the research question, and the procedures for the University REB. This will also include standardized forms for information and consent, and a policy for recruitment, participant safety, and data storage and security. The REB has templates and examples, and our lab Slack channel for ethics includes examples as well.

Both of these documents, the analysis plan and the ethics protocol, should describe exactly what we are doing and why, and should provide enough information that someone else would be able to reproduce our experiments in their own lab. They will also provide an outline for your eventual method section (ethics protocol) and your results section (analysis plan).

  • Document your analysis plan and ethics protocol work in Google Docs.
  • Link these documents to the project sheet for your project.
  • Share in the relevant channel in Slack.
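To make the outlier and missing-data rules concrete, here is a minimal sketch of what a pre-registered exclusion step might look like in code. The column names and cutoffs here are hypothetical examples, not lab standards; your own plan should state its specific criteria:

```python
# A sketch of pre-registered exclusion rules applied to a hypothetical
# data frame with 'rt' (response time, ms) and 'accuracy' columns.
import pandas as pd

def apply_exclusions(df):
    """Apply the exclusion criteria from the analysis plan and log counts."""
    n_start = len(df)
    df = df.dropna(subset=['rt', 'accuracy'])       # missing data
    df = df[(df['rt'] > 200) & (df['rt'] < 5000)]   # implausible RTs
    df = df[df['accuracy'] >= 0.55]                 # near-chance responders
    print(f"Excluded {n_start - len(df)} of {n_start} rows")
    return df

# Hypothetical data: only the first row survives all three rules.
trials = pd.DataFrame({'rt': [350, 120, 900, None],
                       'accuracy': [0.9, 0.8, 0.4, 0.7]})
clean = apply_exclusions(trials)
```

Writing the rules as code, before seeing the data, keeps the plan honest and makes deviations easy to spot later.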

Collect data

Once the experiment is designed and the stimuli have been examined, we’re ready to collect data. Before you run your first subject, however, there are some things to consider. Take some time to run yourself through every condition several times and ask a lab member to do the same. Use this process to make sure things are working exactly as you intend, that the data are being saved on the computer, and that the experiment takes as long as planned.

When you are ready to collect data for your experiment:

  • Meet with all of your research volunteers to go over the procedure.
  • Book the experiment rooms on the Google Calendar.
  • Reserve a laptop or laptops on the Google Calendar.
  • Recruit participants through SONA or flyers.
  • Use our lab email for recruitment.

While you are running your experiment:

  • Document the study in Google Docs and/or Slack
  • Make a note of anything unusual or out of the ordinary.
  • Collect signatures from participants if you are paying them.
  • Data should be stored in text files, Excel, or Google Sheets. Be sure these are linked to the project sheet.
  • Be sure to follow the data storage procedures outlined in the ethics protocol.

Data Management

Your data plan should specify where and how to store your data. While you are collecting data you should be working on a script in R (or Python) to extract and summarize data according to your plan. When you reach the planned sample size, ensure that all of that data are secure and backed up and do an initial summary with your R script.

As you work on summarizing and managing your data:

  • Make notes in the project sheet or a Google Doc about where the data are stored.
  • Document your steps in an R Notebook (or Python Notebook).
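Our summary scripts are usually written in R; here is a minimal Python/pandas sketch of the same idea. The column names (‘subject’, ‘condition’, ‘correct’) and the data are hypothetical:

```python
# A minimal summary script: mean accuracy per subject and condition,
# one row per subject-by-condition cell.
import pandas as pd

def summarize(trials):
    """Collapse trial-level data to one accuracy score per cell."""
    return (trials.groupby(['subject', 'condition'])['correct']
                  .mean()
                  .reset_index(name='prop_correct'))

# Hypothetical trial-level data standing in for the real files.
trials = pd.DataFrame({
    'subject':   [1, 1, 1, 1, 2, 2],
    'condition': ['A', 'A', 'B', 'B', 'A', 'A'],
    'correct':   [1, 0, 1, 1, 1, 1],
})
print(summarize(trials))
```

In practice the real script would read the raw files from the storage location named in the project sheet, then write the summary back out alongside them.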

Plots & Stats

When you have completed your experiment and taken care of the data storage and basic processing, it’s time to have fun and see what you discovered. The analysis plan is your guide: it describes how you want to analyze the data, what your dependent variables are, and how to conduct statistical tests with your data to test the hypothesis. But before you do any statistics, work on visualizing the data. Use your R notebook to document everything and generate boxplots, scatter plots, or violin plots to see the means, medians, and the distribution of the data.

Because you are using R Notebooks to do the analysis, you can write detailed descriptions of how you created the plot, what the plot is showing, and how we should interpret the plot.
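As an illustration of this first-pass visualization, here is a minimal boxplot in Python/matplotlib (the lab workflow uses R notebooks; the condition names and scores below are hypothetical):

```python
# First-pass visualization: compare the distributions of two
# hypothetical conditions before running any statistics.
import matplotlib
matplotlib.use('Agg')            # render off-screen (no display needed)
import matplotlib.pyplot as plt

rule_based = [0.91, 0.84, 0.88, 0.79, 0.93]        # hypothetical scores
info_integration = [0.72, 0.68, 0.81, 0.64, 0.75]  # hypothetical scores

fig, ax = plt.subplots()
ax.boxplot([rule_based, info_integration])
ax.set_xticklabels(['Rule-based', 'Info-integration'])
ax.set_ylabel('Proportion correct')
fig.savefig('accuracy_boxplot.png')
```

The point of plotting first is to catch skew, outliers, and surprises before any test is run; the written interpretation goes right next to the plot in the notebook.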

You can also use R to conduct the tests we proposed in the analysis plan. This might be a straightforward ANOVA or t-test, LME models, regression, etc. Follow the plan you wrote, and if you deviate from it, justify and document that exploratory analysis.
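For example, a planned two-sample t-test looks like this in Python with scipy (again, R is the lab default, and the data here are hypothetical):

```python
# A planned two-sample t-test on hypothetical accuracy scores.
from scipy import stats

rule_based = [0.91, 0.84, 0.88, 0.79, 0.93]        # hypothetical scores
info_integration = [0.72, 0.68, 0.81, 0.64, 0.75]  # hypothetical scores

t, p = stats.ttest_ind(rule_based, info_integration)
print(f"t = {t:.2f}, p = {p:.4f}")
```

Whatever the tool, the test reported should be the test that was planned, with any extras clearly flagged as exploratory.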

If you are fitting a decision-boundary model to your data, make sure you have the code for the model (these will be on my GitHub), and do your modelling separately from the behavioural analysis. The GLM models are saved as R scripts, but you should copy or fork them into your R Notebooks for your analysis so you can document what you did. Make sure that you develop a version for your experiment and that the generic model is not modified.

If you are fitting a prototype or exemplar model, these have been coded in Python. Use Python 3 and a basic text editor or JupyterLab. JupyterLab might be better, as it can generate markdown and reproducible code like R Notebooks.
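To give a sense of what these models compute, here is a stripped-down exemplar (GCM-style) classifier. This is a sketch only, not the lab’s actual model code; the two-dimensional stimuli and the sensitivity parameter are hypothetical:

```python
# A minimal exemplar (GCM-style) classifier: the probability of
# category A is the summed similarity of the probe to A's stored
# exemplars, relative to total similarity across both categories.
import numpy as np

def gcm_prob_a(probe, exemplars_a, exemplars_b, c=2.0):
    """P(category A | probe), with exponential-decay similarity
    over city-block distance; c is the sensitivity parameter."""
    def summed_similarity(exemplars):
        d = np.abs(np.asarray(exemplars) - probe).sum(axis=1)
        return np.exp(-c * d).sum()
    sim_a = summed_similarity(exemplars_a)
    sim_b = summed_similarity(exemplars_b)
    return sim_a / (sim_a + sim_b)

cat_a = [[0.1, 0.2], [0.2, 0.1]]   # hypothetical training exemplars
cat_b = [[0.8, 0.9], [0.9, 0.8]]
p = gcm_prob_a(np.array([0.15, 0.15]), cat_a, cat_b)
print(round(p, 3))                 # near 1: the probe resembles category A
```

Fitting the real model means finding the parameter values (like c here) that best reproduce a participant’s choices, which is exactly why the fitting code needs its own documented notebook.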

Present and explain each step

While you are working on your analysis, you should present the work regularly in lab meetings for the rest of the group, and we can discuss the work when we meet individually. The idea is to keep the ideas and work fresh in your mind by reviewing them often. If you try to do too much at once, you may miss something or forget to document a step. Go over your work, make sure it’s documented, then work on the new analyses, and repeat. The goal is to be familiar with your data and your analysis so that you can explain it to yourself, to me, to your peers, and eventually to anyone who reads your paper.

Use the following guidelines for developing a lab meeting presentation or sharing with me or the group.

  • Make your best plots and figures.
  • Be able to present these to the lab on a regular basis.
  • Use RPubs to share summary work instantly.
  • Keep improving the analysis after each iteration.
  • You should always have 8-10 slides that you can present to the group.
  • Document your work in R Notebooks, Google Docs, and Google Slides.

Write papers around this flow

The final step is to write a paper that describes your research question, your experimental design, your analysis, and your interpretation of what the analysis means. A scientific paper, in my opinion, has two important features:

  1. The paper should be clear and complete. That means it describes exactly what you wanted to find out, how and why you designed your experiment, how you collected your data, how you analyzed your data, what you discovered, and what that means. Clear and complete also means that it can be used by you or others to reproduce your experiments.
  2. The paper should be interesting. A scientific paper should be interesting to read. It needs to connect to a testable theory, some problem in the literature, an unexplained observation. It is just as long as it needs to be.

I think the best way to generate a good paper is to make good figures. Try to tell the story of your theory, experiment, and results with figures. The paper is really just an account of how you made the figures. You might have a theory or model that you can explain with a figure. You can create clear figures for the experimental design, the task, and the stimuli. Your data figures, which you made according to your analysis plan, will frame the results section, and a lot of what you write is telling the reader what the figures show, how you made them, and what they mean. A scientific paper is a narrative for your figures.

Good writing requires good thinking and good planning. But if you’ve been working on your experiment according to this plan, you’ve already done a lot of the thinking and planning work that you need to do to write things. You’ve already made notes about the literature and prior work for your introduction. You have notes from your experimental design phase to frame the experiment. You have an ethics protocol for your methods section and an analysis plan for your results. You’ll need to write the discussion section after you understand the results, but if you’ve been presenting your 8-10 slides in lab meeting and talking about them you will have some good ideas and the writing should flow. Finally, if you’ve been keeping track of the papers in PaperPile, your reference section should be easy.

Submit the paper

The final paper may have several experiments, each around the theme set out in the introduction. It’s a record of what we did, why we did it, and how. The peer reviewed journal article is the final stage, but before we submit the paper we have a few other steps to ensure that our work roughly conforms to the principles of Open Science, each of which should be straightforward if we’ve followed this plan.

  • Create a publication-quality preprint using the lab template. We’ll host this on PsyArXiv (unless submitting a blind ms.).
  • Create a file for all the stimuli or materials that we used and upload to OSF.
  • Create a data archive with all the raw, de-identified data and upload to OSF.
  • Upload a clean version of the R Notebook that describes your analyses to OSF.


As I mentioned at the outset, this might not work for every lab or every project. But the take home message–document everything you do and share your work for feedback–should resonate with most science and scholarship. Is it necessary to have a formal guide? Maybe not, though I found it instructive for me as the PI to write this all down. Many of these practices were already in place, but not really formalized. Do you have a similar document or plan for your lab? I’d be happy to hear in the comments below.

Psychology and the Art of Dishwasher Maintenance

The Importance of Knowing

It’s useful and powerful to know how something works. The cliché that “knowledge is power” may be a common and overused expression, but that does not mean it is inaccurate. Let me illustrate this idea with a story from a different area. I use this rhetorical device often, by the way: I frequently try to illustrate one idea with an analogy from another area. It’s probably a result of being a professor and lecturer for so many years. I try to show the connection between concepts and different examples. It can be helpful and can aid understanding. It can also be an annoying habit.

My analogy has to do with a dishwasher appliance. I remember the first time I figured out how to repair the dishwasher in my kitchen. It’s kind of a mystery how the dishwasher even works, because you never see it working (unless you do this). You just load the dishes, add the detergent, close the door, and start the machine. It runs its cycle out of direct view, and when the washing cycle is finished, clean dishes emerge. So there’s an input, some internal state where something happens, and an output. We know what happens, but not exactly how it happens. We usually study psychology and cognition in the same way. We can know a lot about what’s going in and what’s coming out. We don’t know as much about what’s going on inside because we can’t directly observe it. But we can make inferences about what’s happening based on the function.

The Dishwasher Metaphor of the Mind

So let’s use this idea for a bit. Let’s call it the “dishwasher metaphor”. The dishwasher metaphor for the mind assumes that we can observe the inputs and outputs of psychological processes, but not their internal states. We can make guesses about how the dishwasher achieves its primary function of creating clean dishes based on what we can observe about the input and output. We can also make guesses about the dishwasher’s functions by taking a look at a dishwasher that is not running and examining the parts. We can make guesses about the dishwasher’s functions by observing what happens when it is not operating properly. And we can even make guesses about the dishwasher’s functions by experimenting with changing the input, changing how we load the dishes for example, and observing how that might affect the outputs. But most of this is careful, systematic guessing. We can’t actually observe the internal behaviour of the dishwasher. It’s mostly hidden from our view, impenetrable. Psychological science turns out to be a lot like trying to figure out how the dishwasher works. For better or worse, science often involves careful, systematic guessing.

Fixing the Broken Dishwasher

The dishwasher in my house was a pretty standard early 2000s model by Whirlpool, though sold under the KitchenAid brand. It worked really well for years, but at some point, I started to notice that the dishes weren’t getting as clean as they used to. Not knowing what else to do, I tried to clean it by running it empty. This didn’t help. It seemed like water was not getting to the top rack. And indeed if I opened it up while it was running I could try to get an idea of what was going on. Opening stops the water but you can catch a glimpse of where the water is being sprayed. When I did this, I could observe that there was little or no water being sprayed out of the top sprayer arm. So now I had the beginnings of a theory of what was wrong, and I could begin testing hypotheses about this to determine how to fix it. What’s more, this hypothesis testing also helped to enrich my understanding of how the dishwasher actually worked.

Like any good scientist, I consulted the literature. In this case, YouTube and do-it-yourself websites. According to the literature, several things can affect the ability of the water to circulate. The pump is one of them. The pump helps to fill the unit with water and also to push the water around the unit at high enough velocity to wash the dishes. So if the pump was not operating correctly, the water would not be able to be pushed around and would not clean the dishes. But that’s not easy to service and also, if the pump were malfunctioning, it would not be filling or draining at all. So I reasoned that it must be something else.

There are other mechanisms and operations that could be failing and therefore restricting the water flow within the dishwasher. The most probable cause was that something was clogging the filter that is supposed to prevent particles from entering the pump or drain. It turns out that there’s a small wire screen underneath some of the sprayer arms. Attached to that is a small chopping blade that chops and macerates food particles to ensure they don’t clog the screen. But after a while, small particles can still build up around it and stop it from spinning, which stops the blades from chopping, which lets more food particles build up, which eventually restricts the flow of water, which means there’s not enough pressure to force water to the top level, which means there’s not enough water cleaning the dishes on the top, which leads the dishwasher to fail. Which is exactly what I had been observing. I was able to clean and service the chopper blade and screen and even installed a replacement. Knowing how the dishwasher works allowed me to keep a closer eye on that part, cleaning it more often. Knowing how the dishwasher worked gave me some insight into how to get cleaner dishes. Knowledge, in this case, was a powerful thing.

Trying to study what you can’t see

And that’s the point that I’m trying to make with the dishwasher metaphor. We don’t necessarily need to understand how something works to know that it’s doing its job. We don’t need to understand how it works to use it. And it’s not easy to figure it out, since we can’t observe the internal state. But knowing how it works, and reading about how others have figured out how it works, can give you insight into how the processes work. And knowing how the processes work can give you insight into how you might improve the operation, how you can avoid getting dirty dishes.

Levels of Dishwasher Analysis

This is just one example, of course, and just a metaphor, but it illustrates how we can study something we can’t quite see. Sometimes knowing how something works can help in the operation and the use of that thing. More importantly, this metaphor can help to explain another theory of how we explain and study something. I am going to use this metaphor in a slightly different way and then we’ll put the metaphor away. Just like we put away the clean dishes. They are there in the cupboard, still retaining the effects of the cleaning process, ready to be brought back out again and used: a memory of the cleaning process.

Three ways to explain things

I think we can agree that there are different ways to clean dishes, different kinds of dishwashers, and different steps that you can take when washing the dishes. For washing dishes, I would argue that we have three different levels that we can use to explain and study things. First, there is a basic function of what we want to accomplish: the function of cleaning dishes. This is abstract and does not specify who does it or how it happens, just that it does. And because it’s a function, we can think about it as almost computational in nature. We don’t even need to have physical dishes to understand this function, just that we are taking some input (the dirty dishes) and specifying an output (clean dishes).

Then there is a less abstract level that specifies a process for how to achieve the abstract function. For example, a dishwashing process should first rinse off food, use detergent to remove grease and oils, rinse off the detergent, and then maybe dry the dishes. This is a specific series of steps that will accomplish the computation above. It’s not the only possible series of steps, but it’s one that works. And because this is like a recipe, we can call it an algorithm. When you follow these steps, you will obtain the desired results.

There is also an even more specific level. We can imagine that there are many ways to build a system to carry out the steps in the algorithm so that they produce the desired computation. My Whirlpool dishwasher is one way to implement these steps. But another model of dishwasher might carry them out in a slightly different way. And the same steps could also be carried out by a completely different system (one of my kids washing dishes by hand, for example). The function is the same (dirty dishes –> clean dishes) and the steps are the same (rinse, wash, rinse again, dry), but the steps are implemented by different systems (one mechanical and the other biological). One simple task, but there are three ways to understand and explain it.

David Marr and Levels of Analysis

My dishwasher metaphor is pretty simple and kind of silly. But there are theorists who have discussed more seriously the different ways to know and explain psychology. Our behaviour is one observable aspect of this picture. Just as the dishwasher makes clean dishes, we behave to make things happen in our world. That’s a function. And just like the dishwasher, there is more than one way to carry out a function, and there is also more than one way to build a system to carry out the function. The late and brilliant vision scientist David Marr argued that when trying to understand behaviour, the mind, and the brain, scientists can design explanations and theories at three levels. We refer to these as Marr’s Levels of Analysis (Marr, 1982). Marr worked on understanding vision. And vision is something that, like the dishwasher, can be studied at three different levels.


Marr described the Computational Level as an abstract level of analysis that examines the actual function of the process. We can study what vision does (like enabling navigation, identifying objects, even extracting regularly occurring features from the world) at this level without being much concerned with the actual steps or biology of vision. At Marr’s Algorithmic Level, we look to identify the steps in the process. For example, if we want to study how objects are identified visually, we specify the initial extraction of edges, the way the edges and contours are combined, and how these visual inputs to the system are related to knowledge. At this level, just as in the dishwasher metaphor, we are looking at a series of steps but have not specified how those steps might be implemented. That examination would be done at the Implementation Level, where we would study the visual system’s biological workings. And just like with the dishwasher metaphor, the same steps can be implemented by different systems (biological vision vs computer vision, for example). Marr’s theory about how we explain things has been very influential in my thinking and in psychology in general. It gives us a way to know about something and study something at different levels of abstraction, and this can lead to insights about biology, cognition, and behaviour.

And so it is with the study of cognitive psychology. Knowing something about how your mind works, how your brain works, and how the brain and mind interact with the environment to generate behaviours can help you make better decisions and solve problems more effectively. Knowing something about how the brain and mind work can help you understand why some things are easy to remember and others are difficult. In short, if you want to understand why people—and you—behave a certain way, you need to understand how they think. And if you want to understand how people think, you need to understand the basic principles of cognitive psychology, cognitive science, and cognitive neuroscience.


Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: WH Freeman.

The Cognitive Science Age


Complex patterns in the Namib desert resemble neural networks.

The history of science and technology is often delineated by paradigm shifts. A paradigm shift is a fundamental change in how we view the world and our relationship with it. The big paradigm shifts are sometimes even referred to as an “age” or a “revolution”. The Space Age is a perfect example. The middle of the 20th Century saw not only an incredible increase in public awareness of space and space travel, but many of the industrial and technical advances that we now take for granted were byproducts of the Space Age. 

The Cognitive Science Age

It’s probably cliché to write this, but I believe we are at the beginning of a new age, and a new, profound paradigm shift. I think we’re well into the Cognitive Science Age. I’m not sure anyone calls it that, but I think that is what truly defines the current era. And I also think that an understanding of Cognitive Science is essential for understanding our relationships with the world and with each other.

I say this because in the 21st century, artificial intelligence, machine learning, and deep learning are now being fully realized. Every day, computers are solving problems, making decisions, and making accurate predictions about the future…about our future. Algorithms shape our behaviours in more ways than we realize. We look forward to autonomous vehicles that will depend on the simultaneous operation of many computers and algorithms. Machines will become (and have already become) central to almost everything.

And this is a product of Cognitive Science. As cognitive scientists, this new age is our idea, our modern Prometheus.

Cognitive Science 

Cognitive Science is an interdisciplinary field that first emerged in the 1950s and 1960s and sought to study cognition, or information processing, as its own area of study rather than as a strictly human psychological concept. As a new field, it drew from Cognitive Psychology, Philosophy, Linguistics, Economics, Computer Science, Neuroscience, and Anthropology. Although people still tend to work and train in those more established traditional fields, it seems to me that society as a whole is in debt to the interdisciplinary nature of Cognitive Science. And although it is a very diverse field, the most important aspect in my view is the connection between biology, computation, and behaviour.

The Influence of Biology

A dominant force in modern life is the algorithm: a computational engine that processes information and makes predictions. Learning algorithms take in information, learn to make associations, make predictions from those associations, and then adapt and change. This is referred to as machine learning, but the key here is that machines learn biologically.

For example, the algorithm (Hebbian learning) that inspired machine learning was discovered by the psychologist and neuroscientist Donald Hebb at McGill University. Hebb’s 1949 book The Organization of Behaviour is one of the most important books written in this field and explained how neurons learn associations. This concept was refined mathematically by the Cognitive Scientists Marvin Minsky, David Rumelhart, James McClelland, Geoff Hinton, and many others. The advances we see now in machine learning and deep learning are a result of Cognitive Scientists learning how to adapt and build computer algorithms to match algorithms already seen in neurobiology. This is a critical point: it’s not just that computers can learn, but that the learning and adaptability of these systems is grounded in an understanding of neuroscience. That’s the advantage of an interdisciplinary approach.
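Hebb’s rule is simple enough to state in a few lines: a connection weight grows when the units on both sides are active together. Here is a minimal sketch (the network size, learning rate, and activity patterns are illustrative, not a model of any specific result):

```python
# Hebbian learning: delta_w = lr * post * pre, so only connections
# between co-active units are strengthened.
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian step: add lr times the outer product of activities."""
    return w + lr * np.outer(post, pre)

w = np.zeros((2, 2))           # weights from 2 input to 2 output units
pre = np.array([1.0, 0.0])     # input unit 1 is active
post = np.array([1.0, 0.0])    # output unit 1 is active
for _ in range(10):
    w = hebbian_update(w, pre, post)
print(w)  # only the co-active connection w[0, 0] has strengthened
```

Modern deep-learning rules are far more elaborate, but this coincidence-driven strengthening is the biological seed of the idea.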

The Influence of Behaviour 

As another example, the theoretical grounding for the AI revolution was developed by Allen Newell (a computer scientist) and Herbert Simon (an economist). Their work from the 1950s to the 1970s on understanding human decision making and problem solving, and on modelling them mathematically, provided a computational approach that was grounded in an understanding of human behaviour. Again, this is an advantage of the interdisciplinary approach afforded by Cognitive Science.

The Influence of Algorithms on our Society 

Perhaps one of the most salient and immediately present ways to see the influence of Cognitive Science is in the algorithms that drive the many products that we use online. Google is many things, but at its heart, it is a search algorithm and a way to organize the knowledge in the world so that the information that a user needs can be found. The basic ideas of knowledge representation that underlie Google’s categorization of knowledge were explored early on by Cognitive Scientists like Eleanor Rosch and John Anderson in the 1970s and 1980s. 

Or consider Facebook. The company runs and designs a sophisticated algorithm that learns about what you value and makes suggestions about what you want to see more of. Or, maybe more accurately, it makes suggestions for what the algorithm predicts will help you to expand your Facebook network… predictions for what will make you use Facebook more. 

In both of these cases, Google and Facebook, the algorithms are learning to connect the information that they acquire from the user, from you, with the existing knowledge in the system to make predictions that are useful and adaptive for the users, so that the users will provide more information to the system, so that it can refine its algorithm and acquire more information, and so on. As the network grows, it seeks to become more adaptive, more effective, and more knowledgeable. This is what your brain does, too. It causes you to engage in behaviour that seeks information to refine its ability to predict and adapt.

These networks and algorithms are societal minds; they serve the same role for society that our own network of neurons serves for our body. Indeed, these algorithms can even change society. This is something that some people fear.

Are Fears of the Future Well Founded?

When tech CEOs and politicians worry about the dangers of AI, I think that idea is at the core of their worry. The idea that the algorithms to which we entrust ever more of our decision making are altering our behaviour to serve the algorithm, in the same way that our brain alters our behaviour to serve our own minds and bodies, is something that strikes many as unsettling and unstoppable. I think these fears are well founded and unavoidable, but like any new age or paradigm shift, we should continue to approach and understand this from scientific and humanist directions.

The Legacy of Cognitive Science

The breakthroughs of the 20th and 21st centuries arose from exploring learning algorithms in biology, the instantiation of those algorithms in increasingly powerful computers, and the relationship of both of these concepts to behaviour. The technological improvements in computing and neuroscience have enabled these ideas to become a dominant force in the modern world. Fear of a future dominated by non-human algorithms and intelligence may be unavoidable at times, but an understanding of Cognitive Science is crucial to being able to survive and adapt.


The fluidity of thought

Knowing something about the basic functional architecture of the brain is helpful in understanding the organization of the mind and in understanding how we think and behave. But when we talk about the brain, it’s nearly impossible to do so without using conceptual metaphors (when we talk about most things, it’s impossible to do so without metaphors). 

Conceptual metaphor theory is a broad theory of language and thinking from the extraordinary linguist George Lakoff. One of the basic ideas is that we think about things and organize the world into concepts in ways that correspond to how we talk about them. It’s not just that language directs thought (that’s Whorf’s idea), but that these two things are linked and our language also provides a window into how we think about things. 

Probably the most common metaphor for the brain is the “brain is a computer” metaphor, but there are other, older ideas.

The hydraulic brain

One interesting metaphor for brain and mind is the hydraulic metaphor. This goes back at least to Descartes (and probably earlier), who advocated a model of neural function whereby basic functions were governed by a series of tubes carrying “spirits” or vital fluids. In Descartes’ model, higher-order thinking was handled by a separate mind that was not quite in the body. You might laugh at the idea of brain tubes, but it seems quite reasonable as a theory from an era when bodily fluids were the most obvious indicators of health, sickness, and simply being alive: blood, discharge, urine, pus, bile, and other fluids are all indicators of things either working well or not working well. And when they stop, you stop. In Descartes’ time, these were the primary ways to understand the human body. So in the absence of other information about how thoughts and cognition occur, it makes sense that early philosophers and physiologists would make an initial guess that thoughts in the brain are also a function of fluids.

Metaphors for thinking

This idea, no longer endorsed, lives on in our language in the conceptual metaphors we use to talk about the brain and mind. We often talk about cognition and thinking as information “flowing” as in the same way that fluid might flow. We have common expressions in English like the “stream of consciousness” or “waves of anxiety”, “deep thinking”, “shallow thinking”, ideas that “come to the surface”, and memories that come “flooding back” when you encounter an old friend. These all have their roots (“roots” is another conceptual metaphor of a different kind!) in the older idea that thinking and brain function are controlled by the flow of fluids through the tubes in the brain.

In the modern era, it is still common to describe neural activation as a “flow of information”. We might say that information “flows downstream”, or that there is a “cascade” of neural activity. Of course we don’t really mean that neural activation and cognition flow like water, but like so many metaphors, it’s almost impossible to describe things without using these expressions and, in doing so, activating the common conceptual metaphor that thinking is a fluid process.

There are other metaphors as well (like the electricity metaphor: behaviours being “hard wired”, getting “wires crossed”, an idea that “lights up”), but I think the hydraulic metaphor is my favourite because it captures the idea that cognition is fluid. We can dip our toes in the stream or hold back floods. And as you can see from earlier posts, I have something of a soft spot for river metaphors.



A Curated Reading List

Fact: I do not read enough of the literature any more. I don’t really read anything. I read manuscripts that I am reviewing, but that’s not really sufficient to stay abreast of the field. I assign readings to classes, grad students, and trainees, and we discuss current trends. This is great for the lab, but for me the effect is something like saying to my lab, “read this and tell me what happened”. And I read Twitter.

But I always have a list of things I want to read. What better way to work through these papers than to blog about them, right?

So this is the first instalment of “Paul’s Curated Reading List”. I’m going to focus on cognitive science approaches to categorization and classification behaviour. That is my primary field, and the one I most want to stay abreast of. In each instalment, I’ll pick a paper that was published in the last few months, a preprint, or a classic. I’ll read it, summarize it, and critique it. I’m not looking to go after anyone or promote anyone. I just want to stay up to date. I’ll post a new instalment on a regular basis (once every other week, once a month, etc.). I’m doing this for me.

So without further introduction, here is Reading List Item #1…

Smith, J. D., Jamani, S., Boomer, J., & Church, B. A. (2018). One-back reinforcement dissociates implicit-procedural and explicit-declarative category learning. Memory & Cognition, 46(2), 261–273.


This paper was published online last fall but officially appeared in February 2018. I came across it this morning while I was looking at the “Table of Contents” email from Memory & Cognition. Full disclosure: the first author was my grad advisor from 1995-2000, though we haven’t collaborated since then (save for a chapter). He’s now at Georgia State and has done a lot of fascinating work on metacognition in non-human primates.

The article describes a single study on classification/category learning. The authors are working within a multiple-systems approach to category learning. According to this framework, a verbally-mediated, explicit system learns categories by trying to abstract and use a rule, and a procedurally-mediated, implicit system learns categories by stimulus-response (S-R) association. Both systems have well-specified neural underpinnings. The two systems work together, but sometimes they are in competition. I know this theory well and have published quite a few papers on the topic. So of course, I wanted to read this one.

A common paradigm in this field is to introduce a manipulation that is predicted to impair or enhance one of the systems while leaving the other unharmed, in order to create a behavioural dissociation. The interference in this paper was a 1-back feedback manipulation. In one condition, participants received feedback right after their decision; in the other, they received feedback about their decision on the previous trial. Smith et al. reasoned that the feedback delay would disrupt the S-R learning mechanism of the procedural/implicit system, because it would interfere with the temporal contiguity of stimulus and response. It should have less of an effect on the explicit system, since learners can use working memory to verbalize the rule they used and the response they made.


In the experiment, Smith et al. taught people to classify a large set (480) of visual stimuli that varied along two perceptual dimensions into two categories. You get 480 trials; on each trial you see a shape, make a decision, get feedback, see another shape, and so on. The stimuli are rectangles that vary in size (dimension 1) and pixel density (dimension 2). The figure below shows examples of the range. There was no fixed set of exemplars; rather, “each participant received his or her own sample of randomly selected category exemplars appropriate to the assigned task”.


They used a 2 × 2 design with two between-subjects factors. The first factor was category set. Participants learned either a rule-based (RB) category set, in which a single dimension (size or density) creates an easily verbalized rule, or an information-integration (II) category set, in which the two dimensions must be integrated at a pre-decisional stage. The II categories can’t easily be learned by a verbal rule, and many studies have suggested they are learned by the procedural system. The figure below shows how the hundreds of individual exemplars would be divided into two categories for each of the category sets (RB and II).
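To make the two category structures concrete, here is a minimal sketch of how RB and II exemplars could be sampled. The dimension ranges, boundary placement, and the gap around the diagonal are my own illustrative assumptions, not the actual stimulus parameters used by Smith et al.:

```python
import random

def make_exemplar(category, condition, rng=random):
    """Sample one (size, density) pair for category "A" or "B".
    All numeric values here are illustrative, not Smith et al.'s."""
    if condition == "RB":
        # Rule-based: size alone separates the categories, so a simple
        # verbal rule ("the big ones are B") does the job.
        size = rng.uniform(0, 50) if category == "A" else rng.uniform(50, 100)
        density = rng.uniform(0, 100)
        return size, density
    # Information-integration: the boundary is the diagonal size = density,
    # so no single dimension (and no easy verbal rule) separates the categories.
    while True:
        size, density = rng.uniform(0, 100), rng.uniform(0, 100)
        offset = size - density  # signed distance from the diagonal
        if (offset < -5) if category == "A" else (offset > 5):
            return size, density

# There is no fixed exemplar set: each simulated participant gets their
# own random sample of 480 labelled stimuli.
trials = []
for _ in range(480):
    cat = random.choice("AB")
    trials.append((cat, make_exemplar(cat, "II")))
```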


The second factor was feedback. After each decision, you either received feedback immediately (0Back) or one trial later (1Back). The 1Back condition creates a heavier task demand, so it should make the RB categories harder to learn at first because of the heavier working-memory load. More importantly, it should interfere with II learning by the procedural system, because the 1-Back delay disturbs the S-R association.
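The two feedback schedules amount to a small queue: feedback for each trial is released only after some number of further trials. Here is a sketch, where `run_block` and `classify` are hypothetical names of mine (a stand-in for a participant or model), not the authors’ code:

```python
from collections import deque

def run_block(stimuli, classify, lag=0):
    """Run one block of trials with feedback delayed by `lag` trials.
    lag=0 is the 0Back condition (immediate feedback); lag=1 is 1Back.
    stimuli: list of (correct_label, stimulus) pairs."""
    pending = deque()   # responses still waiting for their feedback
    feedback_log = []
    for label, stim in stimuli:
        response = classify(stim)
        pending.append((response, label))
        if len(pending) > lag:
            resp, lab = pending.popleft()
            feedback_log.append("correct" if resp == lab else "error")
        else:
            feedback_log.append(None)  # nothing to report yet on early trials
    return feedback_log
```

With `lag=1`, the feedback a participant sees on trial n refers to the decision made on trial n-1, which is exactly the break in tight stimulus-response pairing that should trouble the procedural system.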


So what did they find? The learning data are plotted below and suggest that the 1Back feedback made the RB categories harder to learn at first, and seemed to hurt II learning at the end. The 3-way ANOVA (Category × Feedback × Block) provided evidence to that effect, but it’s not an overwhelming effect. Smith et al.’s decision to focus a follow-up analysis on the final block was not very convincing. Essentially, they compared means and 95% CIs for the final block in each of the four cells and found that performance in the two RB conditions did not differ, but performance in the two II conditions did. Does that mean the delayed feedback was disrupting procedural learning? I’m not sure. Maybe participants in that condition (II-1Back) were just growing weary of a very demanding task. A visual inspection of the data seems to support that alternative conclusion as well. Exploring the linear trends might have been a stronger approach.


The second analysis was a bit more convincing. They fit each subject’s data with a rule model and an II model. Each model tried to account for the subject’s final 100 trials. This is fairly straightforward: you are looking to see which model provides the most likely account of the data, and you can then plot the best-fitting model. For subjects who learned the RB categories, the optimal rule should be the vertical partition, and for the II categories, the optimal model is the diagonal partition.
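The logic of that analysis can be illustrated with a deliberately stripped-down version: fit one free boundary per model by grid search and compare log-likelihoods. Real decision-bound models in this literature also estimate perceptual and criterial noise, so treat this only as a sketch; the lapse rate, grid, and stimulus scale are my own assumptions:

```python
import math

def fit_boundary(trials, model, lapse=0.05):
    """Grid-search the best single decision bound for one subject.
    trials: list of (size, density, response), response in {"A", "B"}.
    model="rule": vertical bound on size (size > c -> "B").
    model="ii":   diagonal bound (size - density > c - 50 -> "B").
    Returns (best log-likelihood, best bound). A small lapse rate keeps
    the likelihood finite when a trial contradicts the bound."""
    best_ll, best_c = -math.inf, None
    for c in range(0, 101):
        ll = 0.0
        for size, density, resp in trials:
            if model == "rule":
                predicted = "B" if size > c else "A"
            else:
                predicted = "B" if (size - density) > (c - 50) else "A"
            p = (1 - lapse) if resp == predicted else lapse
            ll += math.log(p)
        if ll > best_ll:
            best_ll, best_c = ll, c
    return best_ll, best_c
```

Whichever model yields the higher likelihood (or the lower AIC, if the models differed in parameter count) is taken as that subject’s best-fitting strategy, and plotting the best bound recovers the vertical or diagonal partitions shown in the figure.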

As seen in the figure below, the feedback manipulation did not change the strategy much for subjects who learned the RB categories. Panels (a) and (b) show that the best-fitting model was usually a rule-based one (the vertical partition). The story is different for subjects learning the II categories. First, there is far more variation in the best-fitting model. Second, very few subjects in the 1Back condition (d) show evidence of using the optimal rule (the diagonal partition).



Smith et al. concluded: “We predicted that 1-Back reinforcement would disable associative, reinforcement-driven learning and the II category-learning processes that depend on it. This disabling seems to have been complete.” But that’s a strong conclusion. Too strong. Based on the modelling, the more measured conclusion seems to be that about 7-8 of the 30 subjects in the II-0Back condition learned the optimal rule (the diagonal), compared to about 1 subject in II-1Back. Maybe just a handful of keeners ended up in the II-0Back condition and learned the complex structure? It’s not easy to say. There is some evidence in favour of Smith et al.’s conclusion, but it’s not at all clear.

I still enjoyed reading the paper. The task design is clever, and the predictions flow logically from the theory (which is very important). It’s incremental work: it adds to the literature on the multiple systems theory but does not (in my opinion) rule out a single-system approach. But I wish they had done a second study as an internal replication to explore the stability of the result, or maybe a second study with the same category structure but different stimuli.

Tune in in a few weeks for the next instalment. Follow my blog if you like, and check the tag to see the full list. As the list grows, I may create a better structure for these posts, too.