Category Archives: Reading List

Use Paperpile’s Annotation System

Reading scientific papers as PDFs is a major part of being an academic. Professors, postdocs, grad students, and undergraduates all end up working with PDFs, making notes, and then using those notes to write a manuscript or paper. Although there are lots of great PDF viewers and reference managers, I use Paperpile, a cloud-based PDF manager that was originally designed to be a reference manager for Google Docs. It can sync all your PDFs with your Google Drive (so you can read them offline) and integrates neatly with Google Scholar and Chrome so that you can import references and PDFs from anywhere. It handles citations in Docs and in Word, and it has a beta app for iPad that is brilliant.

We use this in my lab all the time. It's a paid app, but it's not very expensive: they have education pricing, and as the lab PI, I just pay for a site license for all my trainees.

Making Notes

One of the best features is the note-taking and annotation system. Like most PDF viewers, the built-in viewer lets you highlight, mark up, and annotate PDFs with sticky notes. Because it's cloud-based, these annotations stay with the PDF and sync across devices. Just as in Adobe or Apple Preview, you can highlight, add notes, use strikethrough, or even draw with a pencil. Paperpile organizes these well and makes them easy to navigate. And if you use the iPad app, notes you make there will show up in the browser, and notes made in the browser will show up on the iPad. The icon on the upper right hides your notes.

[Screenshot: the Paperpile PDF viewer's annotation tools]

Exporting and Sharing

If you’re reading along, taking notes and making highlights, you may want to share these with someone or use them in a manuscript (or even a manuscript review). There are several ways to do this.

Export the PDF

The File menu lets you print with or without annotations. If you want to send someone a clean PDF without your notes, that's easy to do; or you can save a new PDF with your notes included.

[Screenshot: the File menu's print options, with and without annotations]

The exported PDF opens in other PDF viewers with your notes intact and editable (Apple Preview is shown below). This is great for sharing with someone who does not use Paperpile. Of course, you can also print a clean PDF without the annotations.

[Screenshot: the exported PDF open in Apple Preview with notes intact]

Export the annotations only

If you're planning to write an annotated bibliography, a critical review, a meta-analysis, a paper for class, or even a manuscript review for a journal, the ability to export the notes is invaluable. In the same File menu, you will see the "Export" option. This lets you export just your notes and highlights in several formats. If you want to share them online, for example, try the HTML option. This is great if you are writing a blog and want to include screenshots and notes. Notice that this keeps the annotations (notes, images, highlights) on the right and data about who made the notes on the left. Helpful if more than one person is making notes.

[Screenshot: annotations exported as HTML, with notes on the right and author data on the left]

And of course, if you're using this annotation tool to make notes for your own paper or a manuscript review, you can export just your notes as text or Markdown, open them in Google Docs, Word, or any editor, and use them to help frame your draft. You have the contents of the notes as text and can quote highlighted text. Images are not saved, of course.

Conclusion

In my opinion, Paperpile is the best reference manager and PDF manager on the scene. Others, like Zotero, Mendeley, and EndNote, are also good (and Zotero is free, of course). Each has things it does really well, but if you already use Paperpile, or are curious about it, I strongly suggest you spend some time with the PDF viewer and annotations. It's really changed my workflow for the better. It's just such well-designed software.


Comments, corrections, and suggestions are always welcome.

There Are Two Kinds of Categorization Researchers

Dual-process accounts of cognition are ubiquitous. In fact, the one thing you can count on is that there are two kinds of cognitive scientists: those who think there are two systems and those who don't. My research has generally argued for the existence of two systems, though the more research I do in this area, the less convinced I am.

With that in mind, I really enjoyed reading this new paper from Mike Le Pelley and Ben Newell at UNSW and Rob Nosofsky at IU. They are commenting on an earlier paper by J. David Smith, Greg Ashby, and colleagues. In this blog post, I'm going to review both papers and argue that neither gives us a complete picture. Both are also missing a critical question.

The Multiple Systems Approach

Smith's paper reported an experiment in which they asked people to learn perceptual categories that either had a single-dimensional rule or a two-dimensional structure that could be learned without a rule. Subjects learned under one of two conditions: either they received feedback immediately after making a classification, or feedback was deferred and delivered after five trials in a row.

[Figure: conceptual structure of the two category sets, from Smith et al.]

The figure above, which I copied from their paper, illustrates the conceptual structure of each category set. Panel A shows the one-dimensional linear boundary between the two clusters of exemplars (a rule), and panel B shows a two-dimensional, diagonal boundary between the two clusters of exemplars, which is not an easily verbalized rule.

The actual images subjects saw were pixel displays that varied in size and density. The figure below shows the full range, and you can imagine that in the single-dimensional case, you would learn to categorize slightly larger things as belonging to one group and slightly smaller things as belonging to the other group, ignoring density. For the non-rule-defined (information-integration) categories, you would incorporate both size and density. To learn, you see a single stimulus on the screen, respond with Category A or B, and then receive feedback (or not). This continues for a few hundred trials.

[Figure: the full range of pixel-display stimuli, varying in size and density]
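To make the two structures concrete, here is a minimal sketch in Python of how category sets like these are typically generated. The means, variances, and sample sizes are made up for illustration (they are not the values from either paper); the key idea is that the information-integration set is just the rule-based set rotated 45 degrees, so no single dimension separates the two categories.

    import numpy as np

    rng = np.random.default_rng(1)

    def make_set(structure, n_per_cat=40):
        """Sample exemplars as (size, density) points for one category set."""
        # Two clusters separated by a vertical (size-only) boundary: a rule.
        cat_a = rng.normal(loc=[35, 50], scale=[5, 12], size=(n_per_cat, 2))
        cat_b = rng.normal(loc=[65, 50], scale=[5, 12], size=(n_per_cat, 2))
        if structure == "II":
            # Rotate the space 45 degrees about its center so the boundary
            # becomes diagonal and both dimensions must be integrated.
            theta = np.deg2rad(45)
            rot = np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]])
            center = np.array([50.0, 50.0])
            cat_a = (cat_a - center) @ rot.T + center
            cat_b = (cat_b - center) @ rot.T + center
        stimuli = np.vstack([cat_a, cat_b])
        labels = np.array(["A"] * n_per_cat + ["B"] * n_per_cat)
        return stimuli, labels

    rb_stimuli, rb_labels = make_set("RB")  # learnable by a verbal rule
    ii_stimuli, ii_labels = make_set("II")  # requires integrating both dimensions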

What they discovered was that when feedback was immediate, people learning the rule-described set (panel A) tended to learn the rule, and people learning the diagonal category set (panel B) learned the diagonal boundary, just as expected. But when feedback was deferred, only the diagonal set was affected. Most subjects who learned the diagonal category set with deferred feedback were unable to learn the diagonal boundary. Instead, most seemed to learn some kind of suboptimal single-dimensional boundary. In the figure below, panels A and B show that immediate and deferred feedback did not seem to affect performance on the rule-based category set. Panels C and D show that deferred feedback seemed to make it nearly impossible for participants to learn the correct diagonal boundary. Panel D shows that no subjects who learned the diagonal categories with deferred feedback actually seemed to be using the optimal diagonal boundary. In other words, the deferred feedback ruined their performance.

[Figure: accuracy and best-fitting boundaries by condition, panels A-D]

Smith and colleagues interpreted this as evidence for two systems that underlie the category learning process. One system relies on verbal working memory and is able to learn the rule-defined structure. It's not affected by the deferred feedback because subjects can hold the responses that they made in working memory until the feedback is delivered. But the diagonal case depends on the implicit, associative learning system. This system relies heavily on a hypothesized dopaminergic learning mechanism, and in order for it to work, there needs to be temporal proximity between stimulus, response, and feedback. When you disrupt that by deferring the feedback, you disrupt the learning process. Smith and colleagues argued this was one of the strongest dissociations in the literature.

They write:

"We hypothesized that deferred reinforcement should disable associative learning and the II category learning that depends upon it. Deferred reinforcement eliminated II category learning. There may be no comparably strong demonstration in the literature. We hypothesized that RB learners hold their category rule in working memory, still allowing its evaluation for adequacy at the end of the trial block when deferred reinforcement finally arrives. Confirming this hypothesis, RB learning was unscathed by deferred reinforcement."

I read this paper when it first came out, and I agreed with them. I thought it made a terrific case.

Evidence against the multiple systems approach

The new paper that was just published by Mike Le Pelley and colleagues argues that the dissociation is not as strong as it seems; or rather, that it doesn't support the existence of two systems. According to their approach, all of the learning happens within the same system, but the more cognitively demanding the task is, the more likely it is to rely on executive functions, or working memory. The single-dimensional rule is fairly easy to learn, so it's not affected by the deferred-feedback intervention. The diagonal, information-integration category set is more cognitively demanding, so it is affected. They went one step further and asked a group of subjects to learn a third category set: one that is cognitively demanding but that can be learned by an explicitly verbalizable rule. I have used a category set like this in the past and have also argued that it engages an explicit category learning system.

Le Pelley et al.'s three different category sets are shown below: on the left, the single-dimensional vertical rule; in the middle, the diagonal rule; and on the right, a two-dimensional conjunctive structure.

[Figure: the three category sets]

They asked their subjects to categorize images that were single blue lines on the screen. These blue lines varied in length (which corresponded to the x-axis on the plots above) and in the angle of the line on the screen (which corresponded to the y-axis). They also asked subjects to learn these in either the immediate-feedback condition or the deferred-feedback condition, the same as Smith and colleagues.

Their data are shown below. As you can see, the deferred feedback did not interfere with the single-dimensional category set: participants performed well in both conditions, and the majority of them used an optimal linear boundary (the "best fitting model"). And just like Smith et al., they found that deferred feedback interfered strongly with performance on the diagonal set. Performance was reduced, and people were less likely to use the optimal diagonal boundary.

[Figure: accuracy and best-fitting models for each condition]

However, there's a twist: the deferred feedback also interfered with the conjunctive-rule category set. This undermines a multiple-systems approach. If subjects were learning these rule-defined categories with an explicit verbal system that was not affected by the deferred feedback, performance on these categories should not have been affected, but it was. The figure shows that performance was reduced, and people were less likely to use the optimal two-dimensional conjunctive boundary when receiving deferred feedback. What the conjunctive set shares with the diagonal set is that both are complex, two-dimensional structures.

Or as they write:

“These findings do not follow from Smith et al.’s multiple-systems account but follow naturally from a cognitive-demands account: The cognitive complexity and memory demands of diagonal and conjunction tasks are greater than for the vertical task, so deferring feedback will impair both two-dimensional tasks and may drive participants to a less-demanding unidimensional strategy.”

My Interpretation

My impression is that these are both really well-done studies. The original paper by Smith et al. tested a clear hypothesis of the multiple-systems approach. They equated their category sets for complexity and difficulty and interfered only with the delivery of the feedback. And consistent with the procedural nature of the implicit system, only the non-rule-defined set was affected. This result is also broadly consistent with research from my lab, Smith's lab, Ashby's lab, and many others.

But do these data imply two separate systems? Although the complexity of the conceptual structure was equated in Smith's design, the representations required to learn the category sets were not. People learning the diagonal rule had to learn more than one dimension, and when Le Pelley et al. asked participants to learn a two-dimensional rule-described set, they found evidence of deferred-feedback interference. That seems broadly inconsistent with the multiple-systems approach. When feedback was immediate, participants could learn the conjunctive rule, and they did so primarily by adopting a two-dimensional conjunctive rule boundary. When the feedback was deferred, their performance was reduced and they lost access to the two-dimensional conjunctive rule.

However, these aren't really comparable studies, in my view. Why?

  • First of all, the category sets, although similar in a broad sense, are not equivalent. The boundaries were more separable and the exemplars were more tightly clustered in perceptual/psychological space in Smith's paper. The clusters of exemplars in Le Pelley's work were broader and more diffuse.
  • Secondly, there may be a difference between learning to classify a single line on the screen (Le Pelley) that varies in length and orientation and learning to classify a rectangle (Smith) that varies in pixel density and size. I don't know if this makes a difference. There may be an emergent dimension in Smith's case, some combination of size and density, that is not controlled for.

These are not problems in and of themselves, but they do make it difficult to determine whether or not Le Pelley's work is a clear challenge to Smith's work. It seems to be, but someone needs to run the exact same design to see if Smith's result is replicable. Or the reverse: it might be worth looking at whether the pixel-density rectangles allow Le Pelley's result to be replicated.

One of the things I like best about Le Pelley's work is that the data were collected online, and all of the data and modelling scripts are available and open. You can even try an example of the study they ran for yourself. I hope to see more studies like this. You may even see more studies like this from my lab, as we are designing some tasks that will work this way.

Unanswered Questions

I said at the outset that I thought both papers were missing an important part of the question. One of the most interesting things to me is how and why participants adopt different strategies. In some cases, there is a clear and easy optimal rule, but often only 60% or 70% of the subjects find it. Why? And those who don't find the rule often use another, suboptimal strategy. Why, and how do they choose? What individual difference, cognitive difference, or local variable allows you to predict whether or not participants will find the optimal boundary? I think that's a really unexplored question. Neither of these studies gets at it, and it seems orthogonal to the multiple-systems debate. It strikes me as an area ripe for investigation.

I’ll add it to my list…


A Curated Reading List

Fact: I do not read enough of the literature anymore. I don't really read anything. I read manuscripts that I am reviewing, but that's not really sufficient to stay abreast of the field. I assign readings to classes, grad students, and trainees, and we may discuss current trends. This is great for the lab, but for me the effect is something like saying to my lab, "read this and tell me what happened." And I read Twitter.

But I always have a list of things I want to read. What better way to work through these papers than to blog about them, right?

So this is the first instalment of "Paul's Curated Reading List". I'm going to focus on cognitive science approaches to categorization and classification behaviour. That is my primary field, and the one I most want to stay abreast of. In each instalment, I'll pick a paper that was published in the last few months, a preprint, or a classic. I'll read it, summarize it, and critique it. I'm not looking to go after anyone or promote anyone. I just want to stay up to date. I'll post a new instalment on a regular basis (once every other week, once a month, etc.). I'm doing this for me.

So without further introduction, here is Reading List Item #1…

Smith, J. D., Jamani, S., Boomer, J., & Church, B. A. (2018). One-back reinforcement dissociates implicit-procedural and explicit-declarative category learning. Memory & Cognition, 46(2), 261-273.

Background

This paper was published online last fall but officially appeared in February 2018. I came across it this morning while I was looking at the "Table of Contents" email from Memory & Cognition. Full disclosure: the first author was my grad advisor from 1995 to 2000, though we haven't collaborated since then (save for a chapter). He's now at Georgia State and has done a lot of fascinating work on metacognition in non-human primates.

The article describes a single study on classification/category learning. The authors are working within a multiple-systems approach to category learning. According to this framework, a verbally mediated, explicit system learns categories by trying to abstract a rule, and a procedurally mediated, implicit system learns categories by stimulus-response (S-R) association. Both systems have well-specified neural underpinnings. These two systems work together, but sometimes they are in competition. I know this theory well and have published quite a few papers on the topic. So of course, I wanted to read this one.

A common paradigm in this field is to introduce a manipulation that is predicted to impair or enhance one of the systems and leave the other unharmed, in order to create a behavioural dissociation. The interference in this paper was a 1-back feedback manipulation. In one condition, participants received feedback right after their decision; in another, they received feedback about the decision they had made on the previous trial. Smith et al. reasoned that the feedback delay would disrupt the S-R learning mechanism of the procedural/implicit system, because it would interfere with the temporal proximity of stimulus, response, and feedback. It should have less of an effect on the explicit system, since learners can use working memory to maintain the rule they used and the response they made.

Methods

In the experiment, Smith et al. taught people to classify a large set (480) of visual stimuli that varied along two perceptual dimensions into two categories. You get 480 trials, and on each trial you see a shape, make a decision, get feedback, see another shape, and so on. The stimuli themselves are rectangles that vary in size (dimension 1) and pixel density (dimension 2). The figure below shows examples of the range. There was no fixed set of exemplars; rather, "each participant received his or her own sample of randomly selected category exemplars appropriate to the assigned task".

[Figure: example stimuli spanning the range of sizes and pixel densities]

They used a 2 × 2 design with two between-subject factors. The first factor was category set. Participants learned either a rule-based (RB) category set, in which a single dimension (size or density) creates an easily verbalized rule, or an information-integration (II) category set, in which both dimensions need to be integrated at a pre-decisional stage. The II categories can't be learned very easily by a verbal rule, and many studies have suggested they are learned by the procedural system. The figure below shows how the hundreds of individual exemplars would be divided into two categories for each of the category sets (RB and II).

[Figure: the RB and II category structures]

The second factor was feedback. You either received feedback right after each decision (0Back) or one trial later (1Back). The 1Back condition creates a heavier working memory load, so it should make the RB categories harder to learn at first. But it should interfere with II learning by the procedural system, because the 1-back delay disturbs the S-R association.
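As an aside, the feedback scheduling itself is simple to express in code. Here is a minimal sketch, where classify and give_feedback are hypothetical stand-ins for the participant's decision and the feedback display; with n_back=1, every piece of feedback refers to the stimulus and response from the previous trial, breaking the stimulus-response-feedback contiguity.

    from collections import deque

    def give_feedback(stimulus, response, was_correct):
        """Placeholder: show the participant whether a response was correct."""
        print("correct!" if was_correct else "wrong")

    def run_block(trials, classify, n_back=0):
        """Run one block of trials, deferring feedback by n_back trials.

        trials   : list of (stimulus, correct_label) pairs
        classify : callable stimulus -> "A" or "B" (the participant)
        n_back=0 gives immediate feedback; n_back=1 is the 1Back condition.
        """
        pending = deque()  # decisions still awaiting feedback
        for stimulus, correct in trials:
            response = classify(stimulus)
            pending.append((stimulus, response, correct))
            if len(pending) > n_back:
                # Feedback is for the decision made n_back trials ago.
                s, r, c = pending.popleft()
                give_feedback(s, r, r == c)
        while pending:  # deliver any feedback still owed at block end
            s, r, c = pending.popleft()
            give_feedback(s, r, r == c)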

Results

So what did they find? The learning data are plotted below and suggest that the 1Back feedback made the RB categories harder to learn at first and seemed to hurt the II categories at the end. The three-way ANOVA (Category × Feedback × Block) provided evidence to that effect, but it's not an overwhelming effect. Smith et al.'s decision to focus a follow-up analysis on the final block was not very convincing. Essentially, they compared means and 95% CIs for the final block in each of the four cells and found that performance in the two RB conditions did not differ, but performance in the two II conditions did. Does that mean the deferred feedback was disrupting learning? I'm not sure. Maybe participants in that condition (II-1Back) were just getting weary of a very demanding task. A visual inspection of the data seems to support that alternative conclusion as well. Exploring the linear trends might have been a stronger approach.

[Figure: learning curves by block for each of the four conditions]
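To illustrate the kind of trend analysis I have in mind, here is a toy example with made-up block accuracies (not data from the paper). Comparing the fitted slopes, and their confidence intervals, across the four cells would use the whole learning curve rather than just the final block.

    import numpy as np

    # Hypothetical per-block accuracies for one condition (ten blocks).
    blocks = np.arange(1, 11)
    accuracy = np.array([.55, .60, .66, .70, .71, .74, .76, .75, .78, .80])

    # Slope of the learning curve: accuracy gained per block. A flatter
    # slope in II-1Back than in II-0Back would indicate impaired learning.
    slope, intercept = np.polyfit(blocks, accuracy, deg=1)
    print(f"accuracy gain per block: {slope:.3f}")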

The second analysis was a bit more convincing. They fit each subject's data with a rule model and an II model, with each model trying to account for the subject's final 100 trials. This is pretty easy to do: you are just looking to see which model provides the most likely account of the data. You can then plot the best-fitting model. For subjects who learned the RB categories, the optimal boundary is the vertical partition; for the II categories, the optimal boundary is the diagonal partition.
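For the curious, here is a toy version of that model comparison; it is not the actual decision-bound code used in the paper, and the criterion and noise grids are made up. Each model proposes a boundary, a logistic choice rule turns signed distance from the boundary into response probabilities, and the model with the lower negative log-likelihood fits best (both models here have the same number of free parameters, so the likelihoods are directly comparable).

    import numpy as np

    def neg_log_likelihood(signed_dist, responses, noise):
        """Negative log-likelihood of responses under a logistic choice rule."""
        p_b = 1.0 / (1.0 + np.exp(-signed_dist / noise))  # P(respond "B")
        p_b = np.clip(p_b, 1e-9, 1 - 1e-9)
        return -np.sum(np.where(responses == "B", np.log(p_b), np.log(1 - p_b)))

    def fit_boundary(stimuli, responses, model):
        """Grid-search a one-dimensional rule bound or a diagonal bound.

        stimuli   : (n, 2) array of coordinates for the final trials
        responses : length-n array of "A"/"B" responses
        model     : "rule" (vertical boundary) or "diagonal"
        """
        best = np.inf
        for c in np.linspace(0, 100, 101):      # candidate criteria
            if model == "rule":
                d = stimuli[:, 0] - c           # distance from vertical bound
            else:
                d = (stimuli[:, 1] - stimuli[:, 0] + c) / np.sqrt(2)
            for noise in (1, 2, 5, 10, 20):     # perceptual/criterial noise
                best = min(best, neg_log_likelihood(d, responses, noise))
        return best

    # Whichever call returns the smaller value is the "best-fitting model":
    # rule_fit = fit_boundary(stimuli, responses, "rule")
    # diag_fit = fit_boundary(stimuli, responses, "diagonal")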

As seen in the figure below, the feedback did not change the strategy very much for subjects who learned the RB categories. Panels (a) and (b) show that the best-fitting model was usually a rule-based one (the vertical partition). The story is different for subjects learning the II categories. First, there is far more variation in the best-fitting model. Second, very few subjects in the 1Back condition (d) show evidence of using the optimal boundary (the diagonal partition).

[Figure: best-fitting model for each participant, panels a-d]

Conclusions

Smith et al. concluded: "We predicted that 1-Back reinforcement would disable associative, reinforcement-driven learning and the II category-learning processes that depend on it. This disabling seems to have been complete." But that's a strong conclusion. Too strong. Based on the modelling, the more measured conclusion seems to be that about 7-8 of the 30 subjects in the II-0Back condition learned the optimal boundary (the diagonal) compared to about 1 subject in II-1Back. Maybe just a handful of keeners ended up in the II-0Back condition and learned the complex structure? It's not easy to say. There is some evidence in favour of Smith et al.'s conclusion, but it's not at all clear.

I still enjoyed reading the paper. The task design is clever, and the predictions flow logically from the theory (which is very important). It's incremental work: it adds to the literature on the multiple-systems theory but does not (in my opinion) rule out a single-system approach. But I wish they had done a second study as an internal replication to explore the stability of the result, or maybe a second study with the same category structure but different stimuli.

Tune in in a few weeks for the next instalment. Follow my blog if you like, and check the tag to see the full list. As the list grows, I may create a better structure for these, too.