Friday, 20 January 2017

pre-testing and expectations in team-based learning

Last Friday I gave my 2nd-year biochemistry class its first readiness assurance test, or RAT in team-based learning (TBL) terminology. It was a mix of new material (amino acids) and material they should have learned in their prerequisite chemistry courses on pH and buffers. Typically, RATs are meant to be reading quizzes that encourage students to prepare, so that in subsequent classes they can practice applying their newly learned knowledge in what TBL terms Apps (applications). RATs are designed to produce a class average of about 70%. My average last week was 49%.

So, what happened? Interestingly, when I analyzed the marks, it appears that students did better on the new material: recognizing amino acid structures and calculating the pI of amino acids. What they had the most difficulty with was their prior learning on pH and buffers. This surprised me, so I checked with my chemistry colleagues. They suggested that my questions on pH and buffers were more conceptual, whereas first-year chemistry courses tend to focus on calculating pH. They also reminded me that our students prefer to plug and play (calculate) rather than think. I don't think our campus is unusual in this regard - thinking is difficult work!
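To make the calculate-versus-conceptualize distinction concrete, here is a minimal sketch (in Python) of the kind of plug-and-play arithmetic students are comfortable with: the Henderson-Hasselbalch equation for buffer pH, and the usual two-pKa estimate of pI for an amino acid without an ionizable side chain. The numbers are standard textbook pKa values for acetic acid and glycine and are only illustrative; they are not taken from my RAT.

import math

def buffer_pH(pKa, base_conc, acid_conc):
    # Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])
    return pKa + math.log10(base_conc / acid_conc)

def isoelectric_point(pKa_carboxyl, pKa_amino):
    # pI of an amino acid with no ionizable side chain: the average of the
    # two pKa values flanking the zwitterionic form
    return (pKa_carboxyl + pKa_amino) / 2

# Acetate buffer with 0.10 M acetate and 0.05 M acetic acid (pKa ~ 4.76)
print(buffer_pH(4.76, 0.10, 0.05))        # ~5.06

# Glycine: pKa1 ~ 2.34 (carboxyl), pKa2 ~ 9.60 (amino), so pI ~ 5.97
print(isoelectric_point(2.34, 9.60))

Calculations like these are exactly the plug and play my colleagues describe; a conceptual question asks what the buffer is actually doing at the molecular level, which the formula alone does not reveal.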

But it does raise an interesting issue for implementing TBL as a teaching and learning strategy. My understanding of TBL practice is that RATs should focus more on conceptual than on calculation-style questions, which should promote discussion of the questions during the team phase of the two-stage testing that is built into RATs. In a readiness assurance test, students first complete the test (10 MCQs in my classes) individually and then repeat the same test as a team using IF-AT cards, so that they receive immediate feedback about their understanding. It is great at immediately revealing misconceptions. I have been using this technique since 2011 and can attest that it works well.
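For readers unfamiliar with IF-AT cards, they are scratch-off answer sheets: a team keeps scratching until it uncovers the correct answer, and credit decreases with each additional scratch. A minimal sketch of how a team score might be tallied follows; the 4/2/1/0 point scale is a common IF-AT convention that I am assuming here for illustration, not necessarily the weighting used in my course.

# Hypothetical IF-AT tally: credit depends on how many scratches a team
# needed before finding the correct answer (4/2/1/0 assumed, not prescribed)
CREDIT_BY_ATTEMPT = {1: 4, 2: 2, 3: 1, 4: 0}

def team_rat_score(attempts_per_question):
    # attempts_per_question: the attempt (1-4) on which the team uncovered
    # the correct answer for each of the 10 MCQs
    earned = sum(CREDIT_BY_ATTEMPT[a] for a in attempts_per_question)
    possible = CREDIT_BY_ATTEMPT[1] * len(attempts_per_question)
    return 100 * earned / possible

# A team that found 7 answers on the first scratch, 2 on the second, 1 on the third
print(team_rat_score([1]*7 + [2]*2 + [3]))   # 82.5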

However, there does seem to be a tension in the Readiness Assurance Process (RAP) of team-based learning. The RAP is what makes TBL a flipped-classroom teaching strategy. In the RAP, students are assigned pages to read from the textbook (or an article, a podcast, or whatever they need to initially prepare themselves to learn in class), and then, during the first class of the course module/topic, they write a RAT in the two-stage testing style described above. The RAP is intended to encourage students to do their pre-class preparation and to hold them accountable for it. It is not meant to be the end of teaching and learning for the module; rather, it marks the beginning. Thus the RAT should be considered, in essence, a reading quiz. The TBL literature suggests that a typical RAT can be constructed from the headings and subheadings of the assigned textbook chapter. However, the literature also suggests that the questions should generate discussion and debate during the team portion of the RAT. What I have difficulty with in implementing TBL in my classes is this apparent tension between producing a RAT that is a reading quiz and producing a RAT that generates discussion and debate. A reading quiz typically sits fairly low on Bloom's taxonomy (mostly recall questions), whereas questions that foster debate and discussion need to move beyond simple right/wrong answers. Hence the tension inherent in the design of RATs: they should be reading quizzes that are nonetheless able to foster debate.

I have a hard time constructing these sorts of tests, and I believe that is what produced the poor class average on my first RAT of the term last week. What I thought were simple recall questions based on what students had learned in prior courses ended up exposing some fundamental misconceptions about their learning. I guess that is what RATs are supposed to do. I was just surprised by how many misconceptions students had about pH and buffers, given that they have been learning this material since high school. On the other hand, if you don't use it, you lose it, and I suspect many of my biochemistry students have not had to think about pH and buffers for a year or two.

The way I handled the situation was to mark the RAT out of 9 instead of 10 (there was one question that no one answered correctly on the first attempt, although a couple of teams got it on their second try), and I have also informed students that I will not include their lowest RAT result when I calculate their final grade for the course. Hopefully that is sufficient to press the reset button, so that students do not feel like they are stumbling right out of the gate in the first week of classes.
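Dropping the lowest RAT is a trivial calculation, but for completeness here is a sketch of how the RAT component of a final grade might be computed once the lowest score is excluded. The scores and the equal weighting are made up for illustration; only the drop-the-lowest rule comes from my actual policy.

def rat_component(rat_percentages):
    # Average of a student's RAT scores with the lowest one dropped;
    # assumes at least two RATs have been written
    kept = sorted(rat_percentages)[1:]
    return sum(kept) / len(kept)

# e.g. a student who struggled on the first RAT but recovered afterwards
print(rat_component([49, 68, 72, 81]))   # 73.67 - the 49 is dropped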

Resources

Dihoff, R. E., Brosvic, G. M., Epstein, M. L., & Cook, M. J. (2004). Provision of feedback during preparation for academic testing: Learning is enhanced by immediate but not delayed feedback. The Psychological Record, 54(2), 207–231.

Haidet, P., Kubitz, K., & McCormack, W. T. (2014). Analysis of the team-based learning literature: TBL comes of age. Journal on Excellence in College Teaching, 25(3&4), 303–333.

Metoyer, S. K., Miller, S. T., Mount, J., & Westmoreland, S. L. (2014). Examples from the trenches: Improving student learning in the sciences using team-based learning. Journal of College Science Teaching, 43(5), 40–47.