Wednesday, 20 December 2017

social pedagogy

What is social pedagogy?

This question first arose for me a few years ago when I started implementing e-portfolios in some of my courses and published an article on their efficacy in promoting student learning outcomes. Back then there were a few articles (see below) that seemed to suggest that social pedagogy simply involves having students make their learning public. I found this dissatisfying because it didn't explain how one makes this a requirement or how one grades it. It seems to me that it involves inviting students to publicize their learning, but then how does one assess that? Twenty-seven years of teaching have taught me that unless instructors assess something, students will not attend to it. In teaching and learning, assessment is what produces value for students. If instructors do not assess something, students are sent the message that the instructor does not value it. A bit black and white, I know, but that is my sense of the interaction between students, instructors, and student learning. Granted, when students become independent learners, assessment is done by the students themselves. But that only happens after students have developed some expertise in learning and understand the value of self-assessment and how to go about the constructive criticism of their own abilities and learning.

After a short perusal of the web, it appears that some (see resources below) use social pedagogy to describe a way of being in community and caring for each other's learning. I guess that could encompass e-portfolio practice, but those practitioners seem to be advocating for living in community. I don't understand how advocates of e-portfolio practice are invoking social pedagogy in this context.

So, back to my question: what is social pedagogy? Pedagogy involves an understanding of how teaching and learning are practised. The articles on social pedagogy in an e-portfolio context advocate for students to make their own learning public and to respond publicly to the feedback they receive in that public forum. But I am having difficulty understanding how we, as instructors, support students' forays into publicizing their learning, and more importantly, how we support their responses to their peers' publicized learning. The closest model I have is peer review, in which students provide constructive criticism of their fellow students' term papers, presentations, participation, etc. But I don't think this is really what those who have successfully implemented e-portfolios mean when they advocate for social pedagogy. Or rather, it is more than simply peer review, because peer review can be a private transaction between only two students. In contrast, e-portfolios are intended to make student learning public, i.e. beyond the classroom. How does this get assessed and supported?

The reason this is an issue for me is that I understand how making my learning public provokes me to really consider my thinking. Hence why I keep this blog. But it takes courage to expose our own misunderstandings, and the possibility of a mistake in our thinking, in a public forum. So many of my students are very reluctant to open their e-portfolios to the public and to encourage responses from the public. As instructors, do we simply tell students to get over it and grow up? That seems a little harsh to me. What practices do instructors use to help students understand the value of making their learning public? How do we assess our students' participation in social pedagogy on both the giving and receiving ends? Is it a matter of simply indicating in our e-portfolio rubric that e-portfolios must be public and that failing to do so elicits a failing grade? That doesn't make sense to me. Rather, there must be some approach that enables students to see the value in publicizing their learning. How do we do that beyond simply assigning some marks for the number of comments they give or receive on their e-portfolios?

How do we assess students' publication of their learning in the midst of their learning? This is different from simply publishing their final paper - that is summative assessment. I want to know how to provide students with a formative assessment of their social learning.

What is social pedagogy in higher education?


Bhika, R., Francis, A., & Miller, D. (2013). Faculty professional development: Advancing integrative social pedagogy using ePortfolio. International Journal of ePortfolio, 3(2), 117–133.

Editors. (2011, May 3). What is social pedagogy? The Therapeutic Care Journal.

Eynon, B., & Gambino, L. M. (2017). High-impact ePortfolio practice: A catalyst for student, faculty, and institutional learning. Sterling, VA: Stylus Publishing.

Eynon, B., Gambino, L. M., & Török, J. (2014). Completion, quality, and change: The difference e-portfolios make. Peer Review, 16(1).

Eynon, B., Gambino, L. M., & Török, J. (2014). What difference can ePortfolio make? A field report from the Connect to Learning project. International Journal of ePortfolio, 4(1), 95–114.

Five Rivers Child Care Center. (2017). The application of social pedagogy at Five Rivers: What is social pedagogy?

Gambino, L. M. (2014). Putting e-portfolios at the center of our learning. Peer Review, 16(1).

Jensen, N. R. (2013). Social pedagogy in modern times. Education Policy Analysis Archives, 21(43), 1–16.

Rafeldt, L. A., Bader, H. J., Lesnick Czarzasty, N., Freeman, E., Ouellet, E., & Snayd, J. M. (2014). Reflection builds twenty-first-century professionals. Peer Review, 16(1).

Storø, J. (2013). Practical social pedagogy: Theories, values and tools for working with children and young people. Bristol, UK: Policy Press.

Wednesday, 17 May 2017

can/should instructors force students to be metacognitive about their learning?

One of the things I have learned about using e-portfolios as a teaching and learning strategy is that when the e-portfolio is a large assignment, it is unnecessary to have students both complete the e-portfolio and write the final exam. For the last couple of years, I have given students in Augustana's fourth-year biology capstone course the choice between writing the final exam and preparing an e-portfolio, because there is a good correlation between what students achieve on the two. This is what I found a couple of years ago when I was still requiring students to complete both the final exam and the e-portfolio:

As you can see, there is a significant correlation between the exam and e-portfolio marks: I only need to use one or the other to determine whether or not students have learned the material. This only works in this capstone course because the e-portfolio includes a writing dossier in which students must synthesize and digest the course readings. What I have learned is that instructors cannot force students to be metacognitive. The year the data for the above correlation were collected, some students thoroughly enjoyed and appreciated completing the e-portfolio, but some felt it was a heinous assignment. I don't think learning occurs when a learner has such a visceral reaction to an assignment. I don't really understand why some students reacted so strongly against the e-portfolio while others simultaneously gave it very high praise.
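As an aside, the sort of correlation described above is straightforward to check for one's own course. Here is a minimal sketch in Python using invented marks (not the actual course data), computing the Pearson correlation coefficient directly:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists of marks."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical marks (percent) for eight students -- invented for illustration.
exam_marks = [62, 71, 55, 80, 68, 90, 74, 58]
eportfolio_marks = [65, 70, 60, 84, 66, 88, 78, 61]

r = pearson_r(exam_marks, eportfolio_marks)
print(round(r, 2))  # a value near 1 means the two assessments track each other
```

A strong r (here well above 0.9 for the invented data) is what would justify treating the two assessments as interchangeable; in practice one would also want a significance test and a reasonable sample size before dropping one of them.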

It is apparent to me from personal comments by some alumni of the course, and from the comments I read on the anonymous student evaluations of teaching, that some students resented being forced to complete the e-portfolio because the assignment felt like make-work: they didn't understand its point. It is interesting that the alumni understand the objective of the assignment now, a couple of years post-graduation. But the alumni thought that this sort of metacognitive ability needs to be developed over many years, and that requiring it of students during the last term of their last year is the wrong time to do it - it is too late.

Interestingly, an Augustana colleague of mine explained it to me this way: many students in the last term of their last year have already checked out. They are either depleted or are already looking forward to the next phase of their lives. These graduating students are no longer focused on completing their degree. My colleague has observed this with his students as they wrap up their senior theses. So, again, an argument that having students reflect on their education for the first time just before they graduate is the wrong time to do it. The whole point of having students reflect on their own learning is to have them think about how and why they learn, with the goal that they will realize there are better and poorer ways to learn, and that thinking about this early in their education may develop better learners. It is too late to do this in the term before students graduate. On the other hand, if students have an understanding of what it means to be a life-long learner, then it won't matter when they are asked to be metacognitive about their learning: earlier is better, but later is better than never.

One of my alumni suggested that making the e-portfolio optional is critical because not all students will be ready or willing to consider how and why they learn, particularly in the last term of their degree. Their reasoning was that the assignment requires students to take a critical look at how and why they learn, and thus places them in a vulnerable position. It may be unsettling to critically reflect at the end of your program and realize that you have been going about learning the wrong way for the last four years. No wonder some students might become angry at me for forcing that realization on them. Doing this early in their learning career, however, allows them to make choices about how and why they learn - there is time for corrective action, or at least a considered reason for not taking action. Just before they graduate is too late.

I had never considered how the assignment might place students in a vulnerable position. But of course, if I think about it carefully, that is the intention of such an assignment: to invite students to be vulnerable - open to reconsidering why and how they learn. The end of a degree may be the wrong time to do that.

It requires more investigation on my part, but I think the bottom line is that you cannot force students to be metacognitive. What I need to determine, as a mature instructor still learning how to teach, is how to enable students to realize that metacognition powerfully impacts their ability to learn deeply. And I think it involves supporting students earlier in their learning careers to be reflective about their learning, as some of my past students have suggested. This is what I am currently attempting with the learning philosophy assignment, which I have implemented across all year levels of the courses I teach. So far the results are promising. But I don't force students to complete a learning philosophy - it is an optional assignment. Thus, students can be reflective about their learning when they are ready.

Can (should) instructors force students to be metacognitive about their learning? I think, as with teaching in general, all we can do is present students with the educational opportunity and then it is up to them whether or not to engage with the learning process.

Tuesday, 28 March 2017

developing students' mastery

Our SoTL Teaching Circle met last week to discuss the fourth chapter of How Learning Works, which asks the question: how do students attain mastery? Ambrose et al explain that there are three elements to developing mastery of a skill or body of knowledge:
  1. Component skills - knowledge of how to do something or how something works. In my mind, this involves the rote portion of knowledge: learners simply need to know the language or the steps before being able to speak, think, or perform in the discipline.
  2. Integration - those component skills then need to be integrated into our existing knowledge structure. This assumes a constructivist understanding of knowledge. Until our existing mental models are restructured to incorporate the new learning, what we have just learned will not stick - it will not make sense with the rest of our worldview.
  3. Application - finally, students need to understand in which contexts the newly learned skill may be applied. They need to know when and where our knowledge will be useful. When is it appropriate to use a hammer? When is it appropriate to use a screwdriver?
Our teaching circle discussion focused on how to develop these different aspects of mastery in our students. A common approach was to provide practice inside our classrooms, during which we as instructors can help students integrate their learning and understand when to apply their skills. Clearly, this requires active learning. And it seems that the best way to do this is to flip the classroom so that some of the component skills (rote learning) are acquired outside of class, before class, leaving time inside of class to practice integrating and contextualizing the knowledge.

But knowing when to flip is a judgement call for an instructor. Sometimes students need help with the component skills (What is significant about that particular chemical structure? How is this word pronounced? What does this sentence mean?). Thus, the level of the particular course content (introductory, intermediate, advanced) will impact how much we can leave to students for their first contact with the material, and therefore how much time we have in class for nurturing students' integration and contextualization of that content.

This means that, as designers of learning experiences, we need to be clear ourselves about what students can reasonably be expected to truly learn. And I think this comes down to making choices about the depth and breadth of what we teach. If the course has a great breadth of content, then maybe the course can only expect students to develop the component skills. On the other hand, if the course objective is to have students learn the material in great depth, then not as much content can be included.

The difficulty I have in teaching is that I have found that learning does not stick very well for students if it is mostly breadth that they are learning. My experience suggests that when students learn something in great depth, it has more impact on their understanding, and thus the newly learned knowledge tends to stick. This seems to make sense: the pedagogical literature is clear that deep learning produces better outcomes than superficial learning. Is that the same as the breadth and depth of a course's content? I am not sure. But clearly, if we want our students' learning to reach a sufficient depth of understanding, then the course content cannot be too broad. There is a balance between breadth and depth of content that needs to be carefully designed for each topic/discipline and for the level of prior knowledge of the students enrolling in the course.


Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How do students develop mastery? In How learning works: Seven research-based principles for smart teaching (pp. 117–146). San Francisco, CA: Jossey-Bass, a Wiley imprint.

Thursday, 23 March 2017

motivating our students to learn

Our SoTL Teaching Circle met again a couple of weeks ago to discuss the third chapter of How Learning Works by Ambrose et al. All of us strive to motivate our students to learn because we understand that unless the motivation to learn and master the material exists, learning is simply too painful to occur. Learning is difficult and time-consuming work. So, if students aren't motivated to study, they simply will not exert the effort required to learn. Ambrose et al explain in this chapter that a number of factors impact students' motivation to learn. Students need to be aware of how the course material enables them to achieve their own goals, that the course material has value to them, and that the time and effort required to learn the material will produce acceptable results. Thus, if students do not believe that success in the course will result from their own efforts, they will not put in the effort required to succeed. Similarly, if students do not view the particular course as contributing to their own career or personal development goals, they will not find it worth their effort to learn.

Studies reported in this third chapter indicate that motivation is impacted by both internal and external factors, and that intrinsic motivation produces better learning outcomes. Ironically, if students are learning to achieve a good grade, they probably will not earn as good a mark as when they are learning to master the material. Are they learning to impress someone else, or are they learning for their own development as a person? Do they view learning the course material as developing skills that will contribute to their ability to succeed in their desired vocation?

So the task of teaching to produce successful learning is to make it apparent to students that what they are learning has significance to the students themselves. This is part of what my learning philosophy study is trying to do. By having students reflect on and articulate their own learning philosophy, my hope is that they will internalize their desire to learn and thus exert the effort to become engaged rather than passive learners. Of course, many factors impact our motivation to accomplish tasks, but if we can design a venue that enables students to reflect on their own values and goals, and to place their current coursework in the context of those learning values and goals, then this may develop the internal motivation to master the course content. As instructors, we need to facilitate and nurture students' connection to what they are learning.


Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). What factors motivate students to learn? In How learning works: Seven research-based principles for smart teaching (pp. 66–90). San Francisco, CA: Jossey-Bass.

Saturday, 18 February 2017

learning is impacted by how we mentally organize our knowledge

Last week our SoTL Teaching Circle met to discuss the second chapter of How Learning Works, which considers how the way students organize their knowledge impacts their learning. Basically, this chapter considers the differences between how experts and novices organize their knowledge. The bottom line is that expertise is accompanied by a greater network of mental connections among nodes of knowledge. In contrast, novices will at best have linear connections linking their different points of knowing. Most students, however, will have islands of knowledge with no connections between their different courses, even when those courses are within the same major.

A few years ago, Kimberly Tanner was the keynote speaker for a series of workshops at the UofA for the AIBA annual conference. The title of the conference was Mind the Gap, which was meant to highlight the difference between thinking like an expert and thinking like a novice. She explained that one of the issues making it difficult for experts to teach novices is that much of our expertise is unarticulated, even to ourselves. Experts (e.g. holders of PhDs) are unaware of how the organization of their knowledge makes them experts. This makes it difficult to help novices transition to expert thinking, because the experts do not know what the novices need to change in order to become experts. I know I have this difficulty when teaching many of my courses. Something that is obvious to me, and thus not worth mentioning to my students, ends up being critical for students to be made aware of in order to progress in the discipline. This is particularly true for those of us who suffer from academic fraud (impostor) syndrome - that thinking that really, I am not that smart, and someone is going to realize their mistake and revoke my PhD. Thus, university and college instructors may tend to keep some aspects of their expert thinking to themselves, because articulating it might reveal that what the expert thinks is worth teaching is actually common knowledge and inappropriate for discussion in the classrooms of higher education.

But as we tell many of our students, if you have a question, it is likely that many in the classroom have the same question. This is what makes teaching difficult - being courageous enough to be intellectually humble in front of both our peers and our students.

On the other hand, the work being done to identify threshold concepts in different disciplines is, I think, a good step toward understanding those key points that we ourselves grasped on our way to developing expertise. As instructors in higher education, we need to understand what the stumbling blocks were for us and our colleagues when we developed our expertise. Once they are identified, we can ensure that our own students know where to concentrate their attention in order to understand the depth and breadth of the discipline. And I think this can be readily facilitated by helping students make links within their own knowledge structures so that their mental models of our world become robust.

This is one of the reasons why I advocate for students to develop an e-portfolio: it provides a platform for them to reflect on their education in a way that cuts across disciplinary boundaries, and even the boundaries between the courses within their major. Students need to understand that knowledge is a whole rather than a series of separate islands. We want our students to understand the world, not just what is currently in front of their noses.


Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How does the way students organize knowledge affect their learning? In How learning works: Seven research-based principles for smart teaching (pp. 40–65). San Francisco, CA: Jossey-Bass.

Haave, N. (2016). E-portfolios rescue biology students from a poorer final exam result: Promoting student metacognition. Bioscene: Journal of College Biology Teaching, 42(1), 8–15.

Krieter, F. E., Julius, R. W., Tanner, K. D., Bush, S. D., & Scott, G. E. (2016). Thinking like a chemist: Development of a chemistry card-sorting task to probe conceptual expertise. Journal of Chemical Education, 93(5), 811–820.

Loertscher, J., Green, D., Lewis, J. E., Lin, S., & Minderhout, V. (2014). Identification of threshold concepts for biochemistry. CBE-Life Sciences Education, 13(3), 516–528.

Smith, J. I., Combs, E. D., Nagami, P. H., Alto, V. M., Goh, H. G., Gourdet, M. A. A., … Tanner, K. D. (2013). Development of the biology card sorting task to measure conceptual expertise in biology. CBE-Life Sciences Education, 12(4), 628–644.

Tuesday, 7 February 2017

what students don't know can hurt them

This term, four of my colleagues and I have formed a teaching circle to discuss SoTL. We are interested in understanding how to engage in SoTL research and also in using the SoTL literature to improve our own teaching praxis. For this term (Winter 2017), we decided to work through How Learning Works by Ambrose et al (2010). Last week we met to discuss the first chapter, which considers how students' prior knowledge can affect their learning. The chapter makes a clear distinction between declarative and procedural knowledge, but we noted that there are other types of knowledge, such as contextual knowledge: knowing when, or in which context, to apply one's declarative or procedural knowledge. However, at times in this first chapter it seemed that Ambrose et al were implying that procedural knowledge encompasses contextual knowledge. I think that greyness in my own understanding is apparent below.

People who can do but cannot explain how or why have procedural but not declarative knowledge, and they run the risk of being unable to apply a procedure in a new context or to explain to someone else how to do the task. Many instructors are like this about their teaching: able to teach, but not able to articulate why their teaching is effective, nor able to teach as effectively in another context. Similarly, students may know facts (grammar, structures, species' names) but not know how to solve problems with that knowledge. These students have declarative but not procedural knowledge. This is what I am trying to move my second-year molecular cell biology students toward - being able to use their molecular cell biology knowledge to solve problems. The issue I have is that I do not know how to effectively teach procedural knowledge other than to have students practice and to model problem-solving myself.

Ambrose et al note that if students' prior knowledge is fragmentary or incorrect, it will interfere with their subsequent learning. In addition, even if their prior knowledge is sound, students are not always able to activate it in order to integrate it with new learning. Instructors need to be able to assess whether or not students' prior knowledge is sound and active for effective learning to occur. Ambrose et al also note that students need to activate knowledge appropriate to the learning context - this is not always the case. Thus, instructors need to monitor the appropriateness of students' knowledge and make clear the appropriate connections for the context. Students need to learn contextual knowledge. For procedural knowledge, I think this simply requires numerous examples and opportunities for practice.

This first chapter suggests that a good way to correct inaccurate knowledge is to give students an example or problem that exposes misconceptions and sets up cognitive dissonance in students' thinking. Ambrose et al suggest using available concept inventories to probe students' inaccurate knowledge. This is what my physics colleague, Ian Blokland, is doing in his classes, which employ iClickers, and what I am attempting to do with 4S apps in my classes, which are taught using team-based learning. This is a great idea, but it is time-consuming work to produce plausible wrong answers/distractors. I have found that most textbook test banks do not do this well.

Something the authors suggest, and that I attempt to do in my courses, is to help students make connections in their learning to earlier material in the same course, to previous prerequisite courses, and to supporting courses I know they are taking at the same time. Students have commented on the end-of-term student evaluations of teaching that they appreciate this. In the language of Ambrose et al, I am activating students' prior knowledge and signalling to students that this is an appropriate context in which to integrate that prior knowledge. Students are not always capable of doing this themselves.

Something else this first chapter suggests for activating prior knowledge, and that I attempt to do in my courses, is to have students consider their own everyday context for how things work. The classic example I often use is that the increased frequency of washroom trips after drinking alcohol is a direct result of alcohol inhibiting the release of ADH from the posterior pituitary. I consciously try to offer everyday examples of the applicability of students' new knowledge.

One example of being explicit about context is the style of writing expected. In the biological sciences, concise, clear writing is necessary, as opposed to the more narrative style possible in English - although a good science paper tells a good story....

Brian Rempel, my organic chemistry colleague, highlighted a paper (Hawker et al 2016) at our teaching circle which investigated the Dunning-Kruger effect in a first-year general chemistry class. Generally speaking, students are poor at assessing how well they performed on an exam immediately after writing it. As others have shown, better-performing students are generally better at assessing their performance. I think this is a case of you don't know what you don't know: if students do not know the material, then they are unable to assess whether or not they knew the answers on the exam - they thought they knew!

In the context of the first chapter of How Learning Works, it seems to me that students' prior knowledge impacts their ability to assess their performance on their exam. Is there a link? I guess the point in this chapter is that students do not always know what they don't know and this impacts their ability to integrate new knowledge and to assess how to apply that knowledge. 

What is interesting in the Hawker et al (2016) study is that there was a significant improvement in postdiction accuracy between the first and second exams, but not between subsequent exams (in this study there were five, with the fifth being the comprehensive final exam). The authors suggest that the first exam is an abrupt corrective to students' expectations of what is expected on a university exam (this was a first-term general chemistry course). Thus, the effect may not be specific to chemistry but may simply result from students' transition to university. Consistent with this, their analysis of the second semester found the same significant difference between the first two exams for students new to university chemistry, but not for students who had completed the first chemistry course. Still, there may be something about general chemistry in particular that prompted the authors to study this in the first place: there is some suggestion in the SoTL literature that chemistry is different in terms of students' ability to monitor their performance, perhaps due to the difficult nature of the subject. Other STEM disciplines have reported similar results (Ainscough et al 2016; Lindsey & Nagel 2015).
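To make the measure concrete: postdiction accuracy in studies like this is essentially the gap between a student's self-assessed score, reported right after writing the exam, and the actual score earned. A rough sketch with invented numbers (not data from any of the cited studies):

```python
def mean_postdiction_error(postdicted, actual):
    """Mean absolute gap between students' postdicted and actual exam scores.

    A smaller value means students monitored their own performance
    more accurately.
    """
    assert len(postdicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(postdicted, actual)) / len(postdicted)

# Invented scores (percent) for one small cohort across two exams.
exam1_postdicted = [85, 78, 90, 70, 80]
exam1_actual     = [62, 55, 74, 58, 66]   # large overestimates on exam 1
exam2_postdicted = [70, 60, 78, 62, 70]
exam2_actual     = [64, 57, 75, 60, 68]   # much closer after the "corrective"

print(mean_postdiction_error(exam1_postdicted, exam1_actual))
print(mean_postdiction_error(exam2_postdicted, exam2_actual))
```

In this toy example the mean error shrinks sharply from exam 1 to exam 2, mimicking the one-time corrective effect the study reports; the actual analyses in Hawker et al are of course more sophisticated (signed postdictions, performance quartiles, significance tests).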


Ainscough, L., Foulis, E., Colthorpe, K., Zimbardi, K., Robertson-Dean, M., Chunduri, P., & Lluka, L. (2016). Changes in Biology Self-Efficacy during a First-Year University Course. CBE-Life Sciences Education, 15(2), ar19.

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., & Norman, M. K. (2010). How does students’ prior knowledge affect their learning? In How learning works: Seven research-based principles for smart teaching (pp. 10–39). San Francisco, CA: Jossey-Bass, an imprint of Wiley.

Hawker, M. J., Dysleski, L., & Rickey, D. (2016). Investigating general chemistry students’ metacognitive monitoring of their exam performance by measuring postdiction accuracies over time. Journal of Chemical Education, 93(5), 832–840.

Lindsey, B. A., & Nagel, M. L. (2015). Do students know what they know? Exploring the accuracy of students' self-assessments. Physical Review Special Topics - Physics Education Research, 11(2), 020103.

Saturday, 28 January 2017

the advantages of stable teams

Two recently published articles (Walker et al 2017; Zhang et al 2017) provide evidence that the Team-Based Learning (TBL) practice of keeping learning teams stable throughout the course produces better student learning outcomes than making teams ad hoc each time group work occurs in class.

The study by Walker et al (2017) is from Kentucky. I like that this study does not spend time establishing whether or not cooperative learning works but simply cites the existing evidence. Instead, it focuses solely on the impact of stable vs shifting teams on the efficacy of cooperative learning. Their results suggest that stable teams are more effective at producing better learning outcomes. The student population studied was a freshman undergraduate sociology course. What is interesting is that it is not only the stability of the team that produces improved student learning outcomes but also the time on task, similar to what the Zhang et al study (below) found in explaining why females had greater gains than males. In the Walker et al (2017) study, there were no differences between stable and shifting teams in the first term. There was, however, a significant difference in the second semester, when the time spent discussing material in teams was increased (the amount of time viewing a film was reduced). Note that although this study involved large-enrollment classes (150-175 students), it was conducted in the tutorials (recitation sessions) - smaller subsets of the class led by teaching assistants (TAs) rather than faculty. Also note that one TA chose to shift teams, based on the pedagogical belief that all students deserve a chance to work in a high-functioning team. In contrast, the other TA created stable teams based on their reading of the TBL literature, which suggests that stability develops stronger relationships that enhance the learning environment. I have some issues with the introduction of the Walker et al (2017) paper: they make many blanket statements about the typical university student experience (large classes, relatively unengaged students) without citing any evidence that this is, in fact, the case. I am sure it is, but in a peer-reviewed publication I expect the evidence to be cited.

The Zhang et al (2017) paper, from Mazur's group, studied the effect of peer instruction (PI) on science students' beliefs about physics and attitudes towards learning physics. The effect of a stable team environment in the PI groups was also investigated. Students' attitudes were measured using the Colorado Learning Attitudes about Science Survey (CLASS). The students were at a university in China. The results indicate that PI improved students' attitudes and that this improvement was greater when the PI teams were stable throughout the term. It seems to me that the study was undertaken to determine why many studies indicate that students' attitudes toward physics in undergraduate physics courses deteriorate, becoming more novice-like. This is similar to what I have seen in my 1st-year biology course with the Learning Environment Preferences survey (paper in preparation) when students are not assigned the task of developing their learning philosophy. I found the results interesting because, in both of my courses, PI was occurring in the form of TBL, and in the class in which a learning philosophy was not assigned, students' cognitive complexity index decreased (became more novice-like). Zhang et al (2017) also found a gender effect: females seemed to make greater gains than males in the PI courses. Further study suggested that this may be because females discussed the instructor's in-class questions more extensively during team discussions. The only criticism I have of the study is that, in the variable-team PI group, the researchers assume students formed teams randomly in each class. However, it is possible that students sat in the same place in class from day to day and sat with their friends. Thus, although team stability was not enforced, neither was team variability.

Regardless, these are interesting results and provide evidence for what many TBL practitioners have observed in their courses: Over time, stable learning teams become more effective at learning.


Michaelsen, L. K., Watson, W. E., & Black, R. H. (1989). A realistic test of individual versus group consensus decision making. Journal of Applied Psychology, 74(5), 834–839.

Sibley, J. (2016). Using teams properly. LearnTBL.

Walker, A., Bush, A., Sanchagrin, K., & Holland, J. (2017). “We’ve Got to Keep Meeting Like This”: A Pilot Study Comparing Academic Performance in Shifting-Membership Cooperative Groups Versus Stable-Membership Cooperative Groups in an Introductory-Level Lab. College Teaching, 65(1), 9–16.

Zhang, P., Ding, L., & Mazur, E. (2017). Peer Instruction in introductory physics: A method to bring about positive changes in students’ attitudes and beliefs. Physical Review Physics Education Research, 13(1), 010104.

Friday, 20 January 2017

pre-testing and expectations in team-based learning

Last Friday I gave my 2nd-year biochemistry class its first readiness assurance test, or RAT in team-based learning (TBL) terminology. It was a mix of new material (amino acids) and material students should have learned in their prerequisite chemistry courses on pH and buffers. Typically, TBL RATs are designed as reading quizzes that encourage students to prepare, so that in subsequent classes they can practise using their newly learned knowledge in what TBL terms Apps (for applications). RATs are designed to produce a class average of about 70%. My average last week was 49%.

So, what happened? Interestingly, when I analyzed the marks, it appears that students did better on the new material: recognizing amino acid structure and calculating the pI of amino acids. What students had the most difficulty with was their prior learning on pH and buffers. This surprised me, so I checked with my chemistry colleagues. They suggested that my questions on pH and buffers were more conceptual, whereas first-year chemistry courses tend to focus on calculating pH. In addition, my chemistry colleagues reminded me that our students prefer to plug and play (calculate) rather than think. I don't think our campus is unusual in this regard - thinking is difficult work!

But it does raise an interesting issue for the implementation of TBL as a teaching and learning strategy. My understanding of TBL practice suggests that RATs should focus more on conceptual than on calculation-style questions, which should promote discussion of the questions during the team phase of the two-stage testing inherent in RATs. In readiness assurance tests, students first complete the test (10 MCQs in my classes) individually and then repeat the same test as a team using IF-AT cards, so that students receive immediate feedback about their understanding. It is great at immediately revealing misconceptions. I have been using this technique since 2011 and can attest that it works well.
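For readers unfamiliar with how the two stages combine into a mark, here is a rough sketch in Python. The 75/25 individual/team weighting and the IF-AT partial-credit scheme (full marks on the first scratch, half on the second, a quarter on the third) are my assumptions for illustration, not TBL doctrine; practitioners vary these.

```python
# Hypothetical two-stage RAT scoring: individual phase (iRAT) plus
# team phase (tRAT) with IF-AT scratch-card partial credit.

def ifat_credit(attempts_to_correct):
    """Partial credit per question, based on how many IF-AT scratches
    the team needed to find the correct answer (assumed scheme)."""
    credit = {1: 1.0, 2: 0.5, 3: 0.25}
    return credit.get(attempts_to_correct, 0.0)

def rat_score(individual_correct, team_attempts, n_questions=10, i_weight=0.75):
    """Combine the individual and team phases into one RAT mark (%)."""
    i_score = individual_correct / n_questions
    t_score = sum(ifat_credit(a) for a in team_attempts) / n_questions
    return 100 * (i_weight * i_score + (1 - i_weight) * t_score)
```

For example, a student who answers 6 of 10 correctly on the individual phase, on a team that scratches every question right on the first try, earns `rat_score(6, [1] * 10)`, which works out to 70%.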

However, there does seem to be a tension inherent in the Readiness Assurance Process (RAP) of Team-Based Learning. The RAP is what makes TBL a flipped-classroom teaching strategy. In the RAP, students are assigned pages to read from the textbook (or an article, a podcast, or whatever students need to initially prepare themselves to learn in class), and then during the first class of the course module/topic, students are administered a RAT in the two-stage testing style described above. The RAP is intended to encourage students to do their pre-class preparation and to hold them accountable for it. It is not intended to be the end of teaching and learning for the particular course module, but rather to mark the beginning. Thus, the RAT should be considered, in essence, a reading quiz. The TBL literature suggests that a typical RAT can be constructed from the topic headings and subheadings of the assigned textbook chapter. However, the TBL literature also suggests that the questions should generate discussion and debate during the team portion of the RAT. What I find difficult in implementing TBL in my classes is the tension between producing a RAT that is a reading quiz and producing a RAT that generates discussion and debate. A reading quiz is typically designed fairly low on Bloom's taxonomy of learning (mostly questions testing recall). In contrast, questions that foster debate and discussion need to move beyond simple right/wrong answers. Hence the tension inherent in the design of RATs: they should be reading quizzes that are nonetheless able to foster debate.

I have a hard time constructing these sorts of tests, and I believe that is what produced the poor class average on my first RAT of the term in my biochemistry class last week. What I thought were simple recall questions based on what students had learned in prior courses ended up exposing some fundamental misconceptions in their learning. I guess that is what RATs are supposed to do. I was just surprised by how many misconceptions students had about pH and buffers, given that they have been learning this material since high school. On the other hand, if you don't use it, you lose it. And I suspect that many of my biochemistry students have not had to consider pH and buffers for a year or two.

The way I handled the situation was to mark the RAT out of 9 instead of 10 (there was one question that no one answered correctly - a couple of teams got it on their second attempt), and I have also informed students that I will not include their lowest RAT result when I calculate their final grade for the course. Hopefully, that is sufficient to press the reset button so that students do not feel they are stumbling right out of the gate in the first week of classes.
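In code, the adjustment amounts to rescaling the test out of fewer questions and dropping the lowest result when averaging. A minimal sketch (the function names and percentage scale are mine, for illustration only):

```python
def rescale(correct, out_of=9):
    """Re-mark a RAT out of fewer questions (e.g. 9 instead of 10),
    expressed as a percentage."""
    return 100 * min(correct, out_of) / out_of

def rat_component(scores):
    """Average a student's RAT percentages after dropping the single
    lowest score (or return the lone score if there is only one)."""
    if len(scores) <= 1:
        return scores[0] if scores else 0.0
    kept = sorted(scores)[1:]  # drop the lowest result
    return sum(kept) / len(kept)
```

So a student whose term RATs come out to 49%, 70%, 80%, and 90% would carry an 80% average into the final grade, with the weak first week forgiven.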


Dihoff, R., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2004). Provision of feedback during preparation for academic testing: Learning is enhanced by immediate but not delayed feedback. The Psychological Record, 54(2), 207–231.

Haidet, P., Kubitz, K., & McCormack, W. T. (2014). Analysis of the team-based learning literature: TBL comes of age. Journal on Excellence in College Teaching, 25(3&4), 303–333.

Metoyer, S. K., Miller, S. T., Mount, J., & Westmoreland, S. L. (2014). Examples from the trenches: Improving student learning in the sciences using team-based learning. Journal of College Science Teaching, 43(5), 40–47.

Thursday, 12 January 2017

the influence of TBL in my teaching

Last term was a gong show for me. Not that things didn't go well - they did. I simply chose to implement or tweak too many things in my courses, hence so few posts (two!?) last term. In the Fall term, I taught three courses: a 4th-year course (History & Theory of Biology), a 3rd-year course (Biochemistry: Intermediary Metabolism), and a 2nd-year course (Molecular Cell Biology).

I have been teaching the history and theory course since the late 1990s, and it chugs along just fine. I have always taught it with the students taking an active role in the teaching of the course; I hadn't realized when I began teaching it in 1998 that I was trying to implement active learning. In this course, students are assigned journal articles from the history and philosophy of biology and are required to write a two-page, double-spaced response to the day's article in preparation for class. In addition, one student is designated the seminar leader and leads the initial portion of the class in considering the implications of the article in light of what has been discussed earlier in the course and in terms of their own experience with biology in their previous three years of our biology program. The remaining half of each class consists of me mopping up the discussion and ensuring that what I consider to be the salient connections are discussed by the entire class.

This worked okay for a few years, until the class began to grow from an initial enrollment in the 1990s of five or six students to the 18-22 students typical now. One thing I found was that the student-led seminars became really boring for the class, because seminar leaders were simply presenting what students had already read. So in the mid-2000s I began asking seminar leaders to direct a class conversation rather than give a formal presentation. This worked until the class grew beyond 15 students, at which point it became difficult for students to manage the class conversation.

A few years ago, I began implementing Team-Based Learning in my courses, and this experience influenced the structure of my history and theory course. What I learned from implementing TBL in other courses is that student conversations work well in groups of 4-7. Smaller or larger than that and the conversation suffers: students are either too shy or there are too many voices. So, in the 2010s I began splitting my classes into groups for the student-led seminars. After a couple of iterations, I realized that it is most effective if the teams are stable throughout the term. This is such a simple tweak, with its effectiveness established in the TBL literature, and I really don't understand why I didn't start doing it sooner. It made a huge difference in the quality of the student-led conversations, both because students were more comfortable with their team-mates and because of the peer pressure to produce a good seminar for team-mates. In addition, the stress of leading a seminar diminished because it was a presentation to the team rather than to the entire class.

I have not completely implemented the TBL structure in this course: it has no RATs or formal Apps. But it follows the spirit of how a TBL course is delivered: the teams are constructed by me randomly and transparently with the students on the first day of class; although there are no RATs, students are held accountable for their pre-class preparation through the required written responses to the assigned reading; and although there are no formal Apps in the TBL sense, I do have students consider my questions after the student-led seminars to ensure that what I consider to be the salient points are raised before the end of class.
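For what it's worth, the transparent random team assignment could be sketched as follows. The target team size and the deal-them-out-like-cards approach are illustrative choices on my part, not a prescribed TBL procedure (in practice I do this live with the students watching).

```python
import random

def make_teams(students, team_size=5, seed=None):
    """Randomly distribute students into stable teams of roughly
    team_size members, aiming for the 4-7 conversational sweet spot."""
    rng = random.Random(seed)
    shuffled = list(students)
    rng.shuffle(shuffled)
    n_teams = max(1, round(len(shuffled) / team_size))
    teams = [[] for _ in range(n_teams)]
    for i, student in enumerate(shuffled):
        teams[i % n_teams].append(student)  # deal students out in turn
    return teams
```

With a class of 22 and a target size of 5, this yields four stable teams of 5-6 students each, which the class then keeps for the whole term.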

My friend and colleague Paula Marentette, who also uses TBL in her classes and was one of the people who suggested that I try implementing TBL in my own courses, explained to me a few years ago that implementing TBL transformed her approach to teaching: even now, when she teaches a course without TBL, she finds that she still uses elements of TBL in all of her classes. I find the same happening with me. For many people, TBL is too constraining. For me, it has been a great structure within which to begin implementing active learning and learner-centered teaching in my courses. As these approaches to teaching and learning have soaked into my being, I am finding that I may no longer need to formally implement TBL in my courses and can instead pick and choose the elements to use as the need arises for my students' learning.


Haave, N. (2014). Team-based learning: A high-impact educational strategy. National Teaching and Learning Forum, 23(4), 1–5.

Farland, M. Z., Sicat, B. L., Franks, A. S., Pater, K. S., Medina, M. S., & Persky, A. M. (2013). Best Practices for Implementing Team-Based Learning in Pharmacy Education. American Journal of Pharmaceutical Education, 77(8), 177.

Wieman, C. E. (2014). Large-scale comparison of science teaching methods sends clear message. Proceedings of the National Academy of Sciences of the United States of America, 111(23), 8319–20.

Weimer, M. (2013). Learner-centered teaching: Roots and origins. In Learner-Centered Teaching: Five Key Changes to Practice (2nd ed., pp. 3–27). San Francisco, CA: Jossey-Bass, a Wiley imprint.