Saturday 28 January 2017

the advantages of stable teams

Two recently published articles (Walker et al 2017, Zhang et al 2017) provide evidence that the Team-Based Learning (TBL) practice of keeping learning teams stable throughout the course produces better student learning outcomes than forming teams ad hoc each time group work occurs during class.

The study by Walker et al (2017) is from Kentucky. I like that this study does not spend time establishing whether or not cooperative learning works but simply cites the existing evidence. Instead, it focuses solely on the impact of stable vs shifting teams on the efficacy of cooperative learning. Their results suggest that stable teams are more effective in producing better learning outcomes. The students studied were enrolled in a freshman undergraduate sociology course. What is interesting is that it is not only the stability of the team that produces improved student learning outcomes but also the time on task, similar to what the Zhang et al (2017) study discussed below found when explaining why females had greater gains than males. In the Walker et al (2017) study, there were no differences between stable and shifting teams in the first semester. There was, however, a significant difference in the second semester, when the time spent discussing material in teams was increased (the amount of time spent viewing a film was reduced). Note that although this study involved large-enrollment classes (150-175 students), the study was conducted in the tutorials (recitation sessions) that were smaller subsets of the class led by teaching assistants (TAs) rather than faculty. Also note that one TA chose to shift teams due to the pedagogical belief that all students deserved a chance to work in a high-functioning team. In contrast, the other TA created stable teams based on their reading of the TBL literature, which suggests that stable teams develop stronger relationships that enhance the learning environment.

I have some issues with the introduction of the Walker et al (2017) paper: they make many blanket statements about the typical university student experience (large classes, relatively unengaged) without citing any evidence that this is, in fact, the case. I am sure it is the case, but in a peer-reviewed publication, I expect the evidence to be cited.

The second paper, by Zhang et al (2017) from Mazur's group, studied the effect of peer instruction (PI) on science students' beliefs about physics and attitudes toward learning physics. The effect of a stable team environment in the PI groups was also investigated. Students' attitudes were measured using the Colorado Learning Attitudes about Science Survey. The students were at a university in China. The results indicate that PI improved students' attitudes and that this improvement was greater when the PI teams were stable throughout the term. It seems to me that the study was undertaken to determine why many studies indicate that students' attitudes toward physics deteriorate in undergraduate physics courses, becoming more novice-like. This is similar to what I have seen in my first-year biology course with the Learning Environment Preferences survey (paper in preparation) when students are not assigned the task of developing their learning philosophy. I found these results interesting because, in both of my courses, PI was occurring in the form of TBL. In the class in which a learning philosophy was not assigned, students' cognitive complexity index decreased (they became more novice-like). Zhang et al (2017) also found a gender effect in which females seemed to make greater gains than males in the PI courses. Further study suggested that this may be a result of females participating more fully in the team discussions of the in-class questions set by the instructor.

The only criticism of the study I have is that, in the variable-team PI group, the researchers assume that the students formed teams randomly during each class. However, it is possible that students sat in the same place in class from day to day and sat with their friends. Thus, although team stability was not enforced, neither was team variability.

Regardless, these are interesting results and provide evidence for what many TBL practitioners have observed in their courses: Over time, stable learning teams become more effective at learning.

Resources

Michaelsen, L. K., Watson, W. E., & Black, R. H. (1989). A realistic test of individual versus group consensus decision making. Journal of Applied Psychology, 74(5), 834–839.

Sibley, J. (2016). Using Teams Properly. LearnTBL.

Walker, A., Bush, A., Sanchagrin, K., & Holland, J. (2017). “We’ve Got to Keep Meeting Like This”: A Pilot Study Comparing Academic Performance in Shifting-Membership Cooperative Groups Versus Stable-Membership Cooperative Groups in an Introductory-Level Lab. College Teaching, 65(1), 9–16.

Zhang, P., Ding, L., & Mazur, E. (2017). Peer Instruction in introductory physics: A method to bring about positive changes in students’ attitudes and beliefs. Physical Review Physics Education Research, 13(1), 010104.

Friday 20 January 2017

pre-testing and expectations in team-based learning

Last Friday I gave my second-year biochemistry class its first readiness assurance test, or RAT in team-based learning (TBL) terminology. It was a mix of new material (amino acids) and material they should have learned from their prerequisite chemistry courses on pH and buffers. Typically, TBL RATs are meant to be reading quizzes that encourage students to prepare so that, in subsequent classes, they can practice using their newly learned knowledge in what TBL terms Apps (applications). RATs are designed to produce a class average of about 70%. My average last week was 49%.

So, what happened? Interestingly, when I analyzed the marks, it appears that students did better on the new material: recognizing amino acid structure and calculating the pI of amino acids. What students had the most difficulty with was their understanding of their prior learning on pH and buffers. This surprised me, so I checked with my chemistry colleagues, who suggested that my questions on pH and buffers were more conceptual and that first-year chemistry courses tend to focus on calculating pH. In addition, my chemistry colleagues reminded me that our students prefer to plug and play (calculate) rather than think. I don't think our campus is unusual in this regard - thinking is difficult work!
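To make the contrast concrete, here is a minimal sketch of the kind of plug-and-play calculation students are comfortable with: applying the Henderson-Hasselbalch equation to a buffer and averaging two pKa values to estimate the pI of glycine. The Python code and the textbook pKa values are mine, for illustration only; they are not my actual RAT questions.

    # Illustrative "plug and play" calculations (textbook pKa approximations).
    from math import log10

    def buffer_pH(pKa, conc_base, conc_acid):
        # Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])
        return pKa + log10(conc_base / conc_acid)

    def isoelectric_point(pKa_carboxyl, pKa_amino):
        # pI of an amino acid with no ionizable side chain: the average of the
        # two pKa values bracketing the neutral (zwitterionic) species.
        return (pKa_carboxyl + pKa_amino) / 2

    # Acetate buffer: 0.2 M acetate and 0.1 M acetic acid (pKa ~ 4.76) -> pH ~ 5.06
    print(round(buffer_pH(4.76, 0.2, 0.1), 2))

    # Glycine: pKa1 ~ 2.34 (carboxyl), pKa2 ~ 9.60 (amino) -> pI ~ 5.97
    print(round(isoelectric_point(2.34, 9.60), 2))

The conceptual questions my colleagues were pointing to are the ones that cannot be answered simply by plugging numbers into these two functions.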

But it does raise an interesting issue for the implementation of TBL as a teaching and learning strategy. My understanding of TBL practice is that RATs should focus more on conceptual than on calculation-style questions. This should promote discussion of the questions during the team phase of the two-stage style of testing that is inherent in RATs. In a readiness assurance test, students first complete the test (10 MCQs in my classes) individually and then repeat the same test as a team using IF-AT cards, so that students receive immediate feedback about their understanding. This is great at immediately revealing misconceptions. I have been using this technique since 2011 and can attest that it works well.
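For readers unfamiliar with how the two stages might be combined into a mark, here is a small sketch. The weighting (two-thirds individual, one-third team) and the decreasing credit per IF-AT scratch are hypothetical numbers chosen for illustration; actual weightings vary among TBL practitioners and are not specified above.

    # Sketch of two-stage RAT scoring (hypothetical weighting and credit scheme).
    def ifat_credit(attempts_needed, credits=(1.0, 0.5, 0.25, 0.0)):
        # Credit for one team question, decreasing with each scratch on the IF-AT card.
        return credits[min(attempts_needed - 1, len(credits) - 1)]

    def rat_score(individual_correct, team_attempts, n_questions=10,
                  w_individual=2/3, w_team=1/3):
        # Combine the individual (iRAT) and team (tRAT) scores into one percentage.
        irat = 100 * individual_correct / n_questions
        trat = 100 * sum(ifat_credit(a) for a in team_attempts) / n_questions
        return w_individual * irat + w_team * trat

    # A student who answered 6/10 individually, on a team that needed the listed
    # number of scratches for each of the 10 questions:
    print(round(rat_score(6, [1, 1, 2, 1, 3, 1, 1, 2, 1, 1]), 1))  # 67.5

The point of the team stage, though, is less the mark than the immediate feedback each scratch provides.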

However, there does seem to be a tension in the Readiness Assurance Process (RAP) of Team-Based Learning. The RAP is what makes TBL a flipped-classroom teaching strategy. In the RAP, students are assigned pages to read from the textbook (or an article, a podcast, or whatever students need to initially prepare themselves to learn in class), and then, during the first class of the course module/topic, students are administered a RAT in the two-stage testing style I described above. The RAP is intended to encourage students to do their pre-class preparation and to hold them accountable for that preparation. It is not intended to be the end of teaching and learning for the particular course module; rather, it is supposed to mark the beginning. Thus, the RAT should be considered, in essence, a reading quiz. The TBL literature suggests that a typical RAT could be constructed from the headings and subheadings of the assigned textbook chapter. However, the TBL literature also suggests that the questions should generate discussion and debate during the team portion of the RAT.

What I find difficult in implementing TBL in my classes is that there seems to be a tension between producing a RAT that is a reading quiz and producing a RAT that generates discussion and debate. Typically, a reading quiz is designed fairly low on Bloom's taxonomy of learning (mostly questions testing recall). In contrast, questions that foster debate and discussion need to move beyond simple right/wrong answers. Hence the tension inherent in the design of RATs: they should be reading quizzes that are nonetheless able to foster debate.

I have a hard time constructing these sorts of tests, and I believe that is what produced the poor class average on my first RAT of the term in my biochemistry class last week. What I thought were simple recall questions based upon what students had learned in prior courses ended up exposing some fundamental misconceptions in their prior learning. I guess that is what RATs are supposed to do. I was just surprised by how many misconceptions students had about pH and buffers, given that they have been learning this material since high school. On the other hand, if you don't use it, you lose it. And I suspect that many of my biochemistry students have not had to think about pH and buffers for a year or two.

The way I handled the situation was to mark the RAT out of 9 instead of 10 (there was one question that no one answered correctly, although a couple of teams got it on their second attempt). I have also informed students that I will not include their lowest RAT result when I calculate their final grade for the course. Hopefully, that is sufficient to press the reset button so that students feel like they are not stumbling right out of the gate in the first week of classes.
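In code, the adjustment amounts to something like the following sketch; the scores are invented, and the weighting of RATs within the final grade is not shown.

    # Sketch of the grade adjustment described above (hypothetical scores).
    def rat_percentages(raw_scores, max_marks):
        # Convert raw RAT scores to percentages given each RAT's maximum mark.
        return [100 * raw / maximum for raw, maximum in zip(raw_scores, max_marks)]

    def rat_average_drop_lowest(percentages):
        # Average the RAT percentages after dropping the single lowest result.
        kept = sorted(percentages)[1:]
        return sum(kept) / len(kept)

    # First RAT re-marked out of 9; later RATs out of 10 (all scores made up).
    scores = rat_percentages([5, 8, 7, 9], [9, 10, 10, 10])
    print(round(rat_average_drop_lowest(scores), 1))  # average of the best three

Dropping the lowest result means that a weak first week does not follow students through the rest of the term.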

Resources

Dihoff, R., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2004). Provision of feedback during preparation for academic testing: learning is enhanced by immediate but not delayed feedback. The Psychological Record, 54(2), 207–231.

Haidet, P., Kubitz, K., & McCormack, W. T. (2014). Analysis of the team-based learning literature: TBL comes of age. Journal on Excellence in College Teaching, 25(3&4), 303–333.

Metoyer, S. K., Miller, S. T., Mount, J., & Westmoreland, S. L. (2014). Examples from the trenches: Improving student learning in the sciences using team-based learning. Journal of College Science Teaching, 43(5), 40–47.

Thursday 12 January 2017

the influence of TBL in my teaching

Last term was a gong show for me. Not that things didn't go well - they did go well. I simply chose to implement or tweak too many things in my courses. Hence so few posts (two!?) last term. In the Fall term, I taught three courses: a fourth-year course (History & Theory of Biology), a third-year course (Biochemistry: Intermediary Metabolism), and a second-year course (Molecular Cell Biology).

I have been teaching the history and theory course since the late 1990s, and it chugs along just fine. I have always taught this course with the students taking an active role in the teaching of the course. I hadn't realized when I began teaching it in 1998 that I was trying to implement active learning. In this course, students are assigned journal articles from the history and philosophy of biology and are required to write a two-page, double-spaced response to the particular day's article in preparation for class. In addition, a student is designated as the seminar leader and leads the initial portion of the class in a consideration of the implications of the article in light of what has been discussed earlier in the course and in terms of their own experience with biology over their previous three years in our biology program. The remaining half of each class consists of me mopping up the discussion and ensuring that what I consider to be the salient connections are discussed by the entire class.

This worked okay for a few years, until the class grew from an initial enrollment in the 1990s of five or six students to the 18-22 students typical now. One of the things I found was that the student-led seminars became really boring for the class because seminar leaders were simply presenting what students had already read. So in the mid-2000s I began asking student seminar leaders to direct a class conversation rather than give a formal presentation. This worked until the class became larger than 15 students. At that point, it became difficult for students to manage the class conversation.

A few years ago, I began implementing Team-Based Learning in my courses, and this experience influenced the structure of my history and theory course. What I learned from implementing TBL in other courses is that student conversations work well in groups of 4-7. Smaller or larger than that and the conversation suffers: students are either too shy or there are too many voices. So, in the 2010s I began splitting my classes into groups for the student-led seminars. After a couple of iterations, I realized that it is most effective if the teams are stable throughout the term. This is such a simple tweak, with its effectiveness established in the TBL literature, and I really don't understand why I didn't start doing it sooner. It made a huge difference in the quality of the student-led conversations, both because students were more comfortable with their team-mates and because of the peer pressure to produce a good seminar for one's team-mates. In addition, the stress of leading a seminar diminished because it was a presentation to the team rather than to the entire class.

I have not completely implemented the TBL structure in this course: it has no RATs or formal Apps. But it follows the spirit of how a TBL course is delivered: I construct the teams randomly and transparently with the students on the first day of class; although there are no RATs, students are held accountable for their pre-class preparation through the required written responses to the assigned reading; and although there are no formal Apps in the TBL sense, I do have students consider my questions after the student-led seminars to ensure that what I consider to be the salient points are raised before the end of class.

My friend and colleague Paula Marentette, who also uses TBL in her classes, was one of the people who suggested that I try implementing TBL in my own courses. She explained to me a few years ago that, for her, implementing TBL transformed her approach to teaching such that even now, when she teaches a course without TBL, she finds that she still uses elements of TBL in all of her classes. I find the same happening with me. Many people find TBL too constraining. For me, it has been a great structure in which to begin implementing active learning and learner-centered teaching in my courses. As these approaches to teaching and learning have soaked into my being, I am finding that I may no longer need to formally implement TBL in my courses and can instead pick and choose the elements to use when the need arises for my students' learning.

Resources

Haave, N. (2014). Team-based learning: A high-impact educational strategy. National Teaching and Learning Forum, 23(4), 1–5.

Farland, M. Z., Sicat, B. L., Franks, A. S., Pater, K. S., Medina, M. S., & Persky, A. M. (2013). Best Practices for Implementing Team-Based Learning in Pharmacy Education. American Journal of Pharmaceutical Education, 77(8), 177.

Wieman, C. E. (2014). Large-scale comparison of science teaching methods sends clear message. Proceedings of the National Academy of Sciences of the United States of America, 111(23), 8319–20.

Weimer, M. (2013). Learner-centered teaching: Roots and origins. In Learner-Centered Teaching: Five Key Changes to Practice (2nd ed., pp. 3–27). San Francisco, CA: Jossey-Bass, a Wiley imprint.