Methodology

All students in both course sections consented to participate in our research project, with none opting out. As planned, we used two data sets to conduct our analysis: surveys and final assignments.

Overview of Students and Our Data Sets

                                        | Both | Section 3: Single Mode             | Section 4: Multimode
----------------------------------------|------|------------------------------------|-----------------------------------
Students in the course                  | 50   | 32                                 | 28
Completed survey                        | 49   | 24                                 | 25
Submitted final assignment              | 50   | 28                                 | 27
Course total grade ranges               |      | As: 13, Bs: 6, Cs: 6, Ds: 3, Fs: 5 | As: 14, Bs: 5, Cs: 6, Ds: 1, Fs: 2
Stage 2, Research average grade         |      | 65%                                | 68%
Stage 3, Average grade                  |      | Prep notes: 58%                    | Online debate: 74%
Stage 5, Final Assignment average grade |      | 65%                                | 71%

Survey Data

This data set was straightforward to use. We administered the survey through the survey tool in Moodle, our learning management system. Students were required to complete the survey but were not graded on it, and responses were anonymous. All but one student completed the survey.

Final Assignment Data

This data set comprised the work students submitted as the last stage of the debates activity-assessment. Five students did not submit a final assignment. To analyze these data, we identified an assessment tool for our learning outcome of critical thinking, independently assessed student work, collated our results, and compared the two sections. Throughout, we embedded inter-rater reliability measures.

Assessment Tool for Critical Thinking

We adapted a rubric assessment tool from TRU’s Institutional Learning Outcome Rubric for Critical Thinking and Investigation, created by the TRU Centre for Excellence in Learning and Teaching. We selected four (of seven) foci as relevant to the debates activity-assessment:

  • Critical and Creative Exploration
  • Critical Interpretation
  • Critical and Creative Engagement
  • Critical Reflection

The rubric is designed to support students from the first to the fourth year of their undergraduate degree. The four levels of achievement are:

  1. Beginning
  2. Approaching
  3. Meeting
  4. Exceeding

First-year students are expected to achieve the first level, “Beginning,” or the second level, “Approaching,” of critical thinking. The third and fourth levels are expected at degree completion.

Adapted rubric: 2023May_Adapted_Rubric_CT_ApproachingLevel

We then designed Assessor Rating Sheets to document our assessments.

SECTION 3: 2023June27_Blank_Assessor Rating Shee_SECTION3

SECTION 4: 2023June27_Blank_Assessor Rating Shee_SECTION4

Process

STAGE 1

Lindsey, Kenya, and Marie used our rubric and Assessor Rating Sheets to independently assess student assignments. Before proceeding, we met on June 27, 2023, to collectively assess one student's assignment to increase inter-rater reliability. We agreed on our interpretation of the foci. We also agreed to include the entire student assignment in our assessment of each focus.

We independently assessed 24 student assignments for evidence of critical thinking using the four foci in the rubric. We assessed 12 randomly selected students in each section, grouping them into sets by the letter grades A, B, and C achieved in the course, as shown below (a sampling sketch follows the list). At the end of our work, each student had three assessments of the four foci (one by each of us).

  • A students – 4 in Section 3 and 4 in Section 4
  • B students – 4 in Section 3 and 4 in Section 4
  • C students – 4 in Section 3 and 4 in Section 4
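
To make the sampling step concrete, here is a minimal sketch in Python of how such a stratified random draw could be carried out. The function `sample_students` and the roster IDs are hypothetical; only the Section 3 group sizes come from the overview table above.

```python
import random

def sample_students(roster, per_group=4, seed=2023):
    """Randomly select `per_group` student IDs from each letter-grade group."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    return {grade: sorted(rng.sample(ids, per_group))
            for grade, ids in roster.items()}

# Hypothetical roster for Section 3; group sizes match the course grade
# ranges reported in the overview table (13 As, 6 Bs, 6 Cs).
section3_roster = {
    "A": [f"S3-A{i}" for i in range(1, 14)],
    "B": [f"S3-B{i}" for i in range(1, 7)],
    "C": [f"S3-C{i}" for i in range(1, 7)],
}

print(sample_students(section3_roster))
# e.g. {'A': ['S3-A1', 'S3-A11', 'S3-A5', 'S3-A8'], 'B': [...], 'C': [...]}
```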

STAGE 2

We met on August 16, 2023, to compare assessments. We used the 1-4 achievement scale (1 = beginning, 2 = approaching, 3 = meeting, 4 = exceeding) to score the achievement level for each focus. We summed a total score for each student, ranging from 4 to 16. We also compared results by grade group and by section.

Simple addition of our collective scores, with higher scores indicating greater achievement of critical thinking, enabled us to make comparisons, particularly between Section 3 (Single Mode) and Section 4 (Multimode).
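
The scoring arithmetic can be illustrated with a short sketch, assuming each assessor's four foci ratings are summed per student and per-section means are then compared; the ratings below are invented, and the `FOCI` names abbreviate the four rubric foci.

```python
from statistics import mean

# The four rubric foci, abbreviated; each is rated 1-4
# (1 = beginning, 2 = approaching, 3 = meeting, 4 = exceeding).
FOCI = ["exploration", "interpretation", "engagement", "reflection"]

def total_score(ratings):
    """Sum one assessor's four foci ratings for a student: range 4..16."""
    return sum(ratings[focus] for focus in FOCI)

# Invented ratings for two students from one assessor's rating sheet.
ratings_by_student = {
    "S3-A1": {"exploration": 3, "interpretation": 2, "engagement": 2, "reflection": 2},
    "S3-B1": {"exploration": 2, "interpretation": 2, "engagement": 1, "reflection": 1},
}

totals = {sid: total_score(r) for sid, r in ratings_by_student.items()}
print(totals)                 # {'S3-A1': 9, 'S3-B1': 6}
print(mean(totals.values()))  # 7.5 -- per-section means support the comparison
```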

Reliability

Quantification enabled us to see, at a glance, agreement and differences in our ratings of student achievement of critical thinking for each focus. We found general agreement in our assessment ratings.

Section 3 level of agreement (out of 46)

  • Exactly the same assessment = 10
  • One-point difference in assessment = 29
  • Two-point difference in assessment (by one of us) = 7 (across all four foci)

Section 4 level of agreement (out of 46)

  • Exactly the same assessment = 6
  • One-point difference in assessment = 29
  • Two-point difference in assessment (by one of us) = 11 (most in the 3rd and 4th foci)

We discussed several foci with two-point differences in assessment, revisiting the student work and reviewing the reasons we had noted for our assessments. Overall, Marie gave higher ratings than Lindsey and Kenya. This may be because Lindsey and Kenya are sociologists, whereas Marie is not. We decided that our level of agreement was sufficient.
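
The agreement tallies above can be computed mechanically. Here is a sketch under the assumption that each (student, focus) cell holds the three assessors' 1-4 ratings and that "difference" means the spread between the highest and lowest rating in a cell; the sample cells are invented.

```python
from collections import Counter

def agreement_counts(cells):
    """Categorize each cell by its rating spread:
    0 = exactly the same, 1 = one-point difference, 2 = two-point difference."""
    return Counter(max(ratings) - min(ratings) for ratings in cells)

# Invented cells: each tuple holds the three assessors' ratings for one
# student on one focus.
cells = [(2, 2, 2), (2, 3, 2), (1, 2, 3), (3, 3, 4), (2, 2, 4)]
print(agreement_counts(cells))  # Counter({1: 2, 2: 2, 0: 1})
```

Under this reading, cells flagged with a two-point spread are the ones we revisited in discussion, as described above.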

 

