
Use of a Competency Framework to Explore the Benefits of Student-Generated Multiple-Choice Questions (MCQs) on Student Engagement

Yeong, Foong May; Chin, Cheen Fei; Tan, Aik Ling
In: Pedagogies: An International Journal, Vol. 15 (2020), No. 2, pp. 83-105
DOI: 10.1080/1554480X.2019.1684924 | ISSN: 1554-480X


Student engagement in large Life Sciences classes can be problematic, especially with course work done outside formal class contact hours. To enhance student engagement with the content outside class time, we designed an assignment spanning one semester that required students to author MCQs. We used Bloom's taxonomy to evaluate the MCQs. Additionally, we derived a three-level framework to analyse the demands on the student question-setters by determining the competencies required to construct the MCQs. This two-tier analysis of MCQs allowed us to gauge the level of student engagement with course materials. The three-level competency framework ranged from students' foundational domain knowledge at level 1, to application and prediction of cellular functions in normal and abnormal situations within a topic at level 2, and across different topics at level 3. Of 40 sample MCQs, slightly over 50% targeted mid- to high-level Bloom's taxonomy. Slightly under 50% of the questions required attainment of level 2 and 3 competencies for construction. Nonetheless, we noted a high level of academic engagement and some cognitive engagement among several students, consistent with self-reports in an anonymous student survey conducted after the semester. We suggest that using a competency framework to analyse student-authored MCQs can make explicit students' efforts at constructing MCQs.

Keywords: Undergraduate; competency levels; student engagement; student-generated questions

Introduction

Large-class teaching of biology at the undergraduate level needs more active-learning approaches rather than relying on purely didactic teaching in lecture theatres. Didactic teaching, though efficient at disseminating information, is largely passive and can lead to lower engagement of students in learning (Wood, [36]). Lower student engagement is likely linked to lower student satisfaction with their learning experiences, and this might result in poor academic performance.

In a large-class Cell Biology module for Life Sciences undergraduates, we designed a generative-learning assignment involving students authoring MCQs as an activity to extend students' engagement outside formal curricular time. The assignment of authoring MCQs required students to think actively about the content and apply it to construct the questions as well as the distractors. Authoring MCQs as an activity showed the components of generative learning in (1) providing a context that generates motivation for students, (2) allowing students to learn and apply the Life Science content, and (3) generating an artefact.

The student-generated MCQs were evaluated firstly using Bloom's taxonomy. In this analysis, we established the level of knowledge required of someone attempting the question. In the second pass of evaluation, we analysed the demands on our student question-setters when they designed MCQs, based on the competencies that the question-setters required to construct their MCQs. This places the focus of our analysis back onto the question-setters' abilities instead of the MCQs themselves. We analysed the activity of student-authoring of MCQs in and of itself to better understand how the activity promoted engagement, by examining the competencies students used to design the MCQs.

Theoretical background

Student-generated questions

Within educational institutions, there are different forms and levels of student engagement in learning activities that depend on the types of learning activities, the content, and the students' age and abilities. Finn and Zimmer ([15]) define the different forms of student engagement in different ways. While all forms of engagement are important, within the context of this work we are interested in fostering both academic and cognitive engagement as the baseline for our students in a large-class setting. We take academic engagement to mean completing assignments in class and at home, and cognitive engagement to mean the "expenditure of thoughtful energy needed to comprehend complex ideas in order to go beyond the minimal requirements" (Finn & Zimmer, [15]). As cognitive engagement can be nurtured (Fredricks, Blumenfeld, & Paris, [17]), it is important for us to improve our instructional design by incorporating relevant activities to promote it.

Among the different instructional strategies, active-learning in class is one way to increase student engagement (Allen & Tanner, [3]). Meta-analyses of work on active-learning revealed that, compared to passive absorption of knowledge, some form of active-learning that puts students at the centre of knowledge assimilation and production (Allen & Tanner, [3]) is more likely to promote better performance and learning (e.g. Armbruster, Patel, Johnson, & Weiss, [4]; Freeman et al., [18]; Michael, [28]). We conceptualised "active-learning" as a form of generative-learning that requires a student to "do something" with the information or knowledge they have (Osborne & Wittrock, [30]). Presumably, the manipulation of information by students helps them develop better conceptual understanding, as they need to apply what they have learned.

Different strategies of generative-learning have been proposed, including but not limited to summarising, self-testing, self-explaining, and teaching (Fiorella & Mayer, [16]). For instance, a group supported the learning of the human heart system by getting students to highlight texts and generate summaries (Lee, Lim, & Grabowski, [24]). In another example of generative-learning, students were asked to generate concept-maps as a form of self-learning (Morse & Jutras, [29]). In both studies, the investigators noted that the use of some form of generative-learning strategies improved student performance.

The use of student-generated questions has previously been shown to be useful for engaging students. For instance, a study in large-class introductory biology courses where students were encouraged to generate questions revealed that the activity helped students learn by engaging them in constructing meaningful questions (Colbert, Olson, & Clough, [10]). In another report on a Biomedical course, student-generated multiple-choice questions (MCQs) using an online platform known as Peerwise not only engaged students, who were eager to author good-quality questions, but also improved student performance (Bottomley & Denny, [8]). A separate study on students taking Genetics revealed that students who participated in the question-authoring activity fared better in academic assessments of different modes (McQueen, Shields, Finnegan, Higham, & Simmen, [27]). In yet another study, on nursing students, the researchers found that writing MCQs helped students to learn bioscience topics that were challenging, as they needed to reflect upon their understanding in order to author MCQs (Craft, Christensen, Shaw, & Bakon, [11]). The deeper understanding that the nursing students reported could be an indication of active engagement with the course materials in order to design MCQs (Craft et al., [11]). Bottomley and Denny ([8]) proposed that the use of question-generation as an activity promoted a "deep approach" to learning (Biggs, [7]). Moreover, Chin and Brown ([9]) noted that students generating questions supported self-directed learning, which is aligned with increasing student engagement with the course. Indeed, Keys ([23], p. 119) proposed that "Writing in scientific genres promotes the production of new knowledge by creating a unique reflective environment for learners engaged in scientific investigations" so as to "take personal ownership of their own scientific ideas ...".

Scaffolding student authoring of MCQs

Students who have had experience answering MCQs have not necessarily been exposed to Bloom's taxonomy. As such, in order for students to author MCQs, they needed to be guided to understand the levels of Bloom's taxonomy. Indeed, based on Vygotsky's idea of the zone of proximal development (Vygotsky, [35]), students mostly need to be guided by instructors familiar with MCQ design in order to write questions. In a technology-enabled environment, Lin and co-workers proposed the importance of the socio-cultural aspect of learning that involves different kinds of scaffolds, including process prompts, displays and models (Lin, Hmelo, Kinzer, & Secules, [25]). For instance, at a basic level, process prompts can aid students by making clear the structure of an MCQ, such as the stem and options, while process models (exemplars of how good MCQs can be crafted) can be examples provided by instructors of what a good MCQ comprises.

A study by Yu ([39]) noted that the use of various scaffolds can promote better student self-regulatory cognition and metacognition, though different types of scaffolds are valued by students differently. Looking across different platforms for student authoring of MCQs, one of the main scaffolds valued by students was process models, including exemplars of MCQs from peers. However, Yu highlighted that the requirements for scaffolds could depend on context and might require empirical investigation.

Evaluating students' MCQs

There are varied ways to evaluate the quality of MCQs. Some researchers analysed students' questions using criteria such as clarity of the questions, errors in the questions and feasibility of the distractors (e.g. Bottomley & Denny, [8]). Student-generated MCQs have also been evaluated using, or adapting, the Bloom's level that the questions target (e.g. Bates, Galloway, Riise, & Homer, [6]; Bottomley & Denny, [8]; McQueen et al., [27]). Typically, when an instructor designs an MCQ, he/she uses Bloom's taxonomy to design the question in terms of the level of student cognition he/she wishes to assess. For instance, the instructor could choose whether he/she wishes to target his/her students' abilities in terms of knowledge, application or analysis, and so designs the MCQs appropriately (Crowe, Dirks, & Wenderoth, [12]). As such, Bloom's taxonomy has mostly been useful for instructors to pitch their MCQs to evaluate students' learning, with the quality of MCQs linked to the levels that the questions target.

The evaluation of MCQs generated by students using Bloom's taxonomy generally assumes that the Bloom's level at which a student question-setter targeted his/her MCQ is a good measure of the cognitive demands on the student himself/herself when designing the question (Bottomley & Denny, [8]; McQueen et al., [27]). Namely, students' MCQs were assessed as to whether the questions tested at the levels of "knowledge", "comprehension", "application", "analysis" and "evaluation", and these evaluations in turn were used as indications of the question-setter's own level of understanding of the subject matter. The underlying assumption is that MCQs targeting higher Bloom's taxonomy levels require higher cognitive engagement, as the effort to generate more complex questions targeting Bloom's higher levels would require more "thoughtful energy" (Finn & Zimmer, [15]).

However, there is little agreement among studies as to whether students were able to compose MCQs testing higher-order cognitive skills according to Bloom's taxonomy (Bates et al., [6]; Bottomley & Denny, [8]; McQueen et al., [27]), hence casting doubts as to whether the question-generating activity is in fact worthwhile. Moreover, it is also not clear whether such an evaluation of MCQs sufficiently reflects the question-setter's knowledge and application of it. Assessing student-constructed MCQs using Bloom's taxonomy alone is hence insufficient: while the taxonomy is useful for gauging the demands on those answering the questions, it does not focus on the question-setters. To date, analysis of MCQs typically does not focus on the cognitive engagement of the setter. This study aims to provide an additional evaluative lens to examine MCQs.

Aim of the study

In our study, we aimed to answer the following research questions:

  • Were students able to generate MCQs that target different levels of cognitive skills based on Bloom's taxonomy?
  • What were the competencies that the students showed when they designed MCQs?
  • How well did the Bloom's levels targeted by the MCQs align with the competencies required to construct them?
  • What forms of engagement were exhibited when students designed MCQs?

The results from the four research questions provide evidence on the usefulness of the competency-based rubric for scaffolding students' authoring of MCQs. Further, the competency-based rubrics would also provide instructors with a more rounded perspective of students' abilities to apply Life Science concepts as they write the MCQs.

Methods

Target students and design of the Peerwise assignment

In our study, we aimed to increase student engagement using Peerwise as the mediating tool. The target students were second-year Life Sciences undergraduates taking an essential Cell Biology module (LSM2103) during the academic year 2014/2015. An assignment was designed that required students to generate one MCQ for each of the four topics of Cell Biology, namely, organelle biogenesis, protein trafficking, cell division and signaling.

Each student-generated MCQ was to have a stem, one key and four distractors. The assignment spanned the 13-week semester and constituted 8% of the students' final scores. Students had to submit two MCQs in the first half of the semester, followed by another two in the second half. Of this, 4% participation points were awarded for a total of four MCQs submitted, with 4% bonus points awarded for good MCQs.

With 324 students, a substantial bank of MCQs was available before the end of the semester. Students used this as a review tool before the summative assessment. Each student was awarded a maximum of 2% final points for answering more than 40 MCQs.
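To make the mark allocation concrete, below is a minimal sketch (in Python) of how a student's score for the assignment could be tallied under the scheme just described. The pro-rating of participation marks, the 2% bonus per good MCQ (capped at 4%, consistent with the bonus scheme described later), and the all-or-nothing treatment of the 2% answering component are illustrative assumptions, not the authors' actual grading procedure.

    # Minimal sketch of the Peerwise assignment mark allocation described above.
    # Pro-rated participation, 2% per good MCQ (capped at 4%) and the all-or-nothing
    # answering credit are assumptions for illustration only.

    def assignment_marks(mcqs_submitted: int, good_mcqs: int, mcqs_answered: int) -> float:
        """Return the percentage points earned for the question-authoring assignment."""
        # Participation: 4% for submitting the four required MCQs (assumed pro-rated).
        participation = 4.0 * min(mcqs_submitted, 4) / 4
        # Bonus: up to 4% for MCQs judged to be of good quality.
        bonus = min(2.0 * good_mcqs, 4.0)
        # Review activity: a maximum of 2% for answering more than 40 peer-authored MCQs.
        answering = 2.0 if mcqs_answered > 40 else 0.0
        return participation + bonus + answering

    # Example: all four MCQs submitted, one judged good, 55 peer questions answered.
    print(assignment_marks(mcqs_submitted=4, good_mcqs=1, mcqs_answered=55))  # 4 + 2 + 2 = 8.0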

The criteria for evaluating each MCQ included writing in Standard English, relevance of the question to the learning objectives (such as learning to apply content knowledge in a contextual manner), picking a correct key, providing three reasonable distractors out of four, and writing a good explanation for the question and correct answer. The grading criteria for awarding bonus points to good student-generated MCQs (Appendix 1) were explained to students during one of the lectures before the assignment started. We scaffolded students' MCQ writing using Bloom's taxonomy (Crowe et al., [12]) as a guide, with examples of MCQs designed around the term "DNA" (Appendix 2). The example was highlighted during the lecture as a concrete illustration of how Bloom's taxonomy could be used to design an MCQ based on the Bloom's level that a question-setter chooses to target.

Student recruitment and consent

Students' consent was sought before the end of the semester. They were provided with information on the aims of the study. An email was sent to students by the administrative staff in our department to explain the research project and to recruit participants. A short explanation of the project was made during one of the classes, and students could ask questions related to the project. A hardcopy of the participant information sheet was provided to the students for detailed information. Students were asked, through a consent form, if they would like to participate in the project and allow the lecturer (first author) to examine their MCQs after the semester was over. They were informed that there would be no repercussions whether they agreed to participate or not.

A physical copy of the consent form was given to each student during one of the classes when the project was explained. Students were given time to read through the participant information sheet and consent form and decide if they wished to participate in the study. Those who wanted to do so were told to hand their forms to the administrative staff, who collected the forms at the end of the class and was also available to receive forms at the Biochemistry Department office on other days. Ethical approval was sought from the author's affiliated university.

Data collection and analysis

Students constructed and uploaded their MCQs to the Peerwise site during the semester. After the semester, the students' MCQs were downloaded as PDFs. From those students who consented to further analysis, we randomly selected 40 MCQs on the topic of "Cell Division" for our study.

The unit of analysis was an individual MCQ. For the study, content analysis was used to examine students' MCQs. The various levels of Bloom's taxonomy as well as the competency levels were the themes identified for our analysis. Bloom's taxonomy was the first analytical framework we used, as one approach to classify the students' MCQs based on whether the questions targeted the different cognitive levels on the part of the individuals answering the questions. That is to say, whether the MCQs actually tested other students in terms of the knowledge, comprehension, application or analysis needed to answer the questions. Based on previous suggestions (Crowe et al., [12]), MCQs could likely target all cognitive levels except synthesis. MCQs within the categories "knowledge" and "comprehension" were considered as lower-cognitive levels, while those within the categories "application", "analysis" and "evaluation" were regarded as higher-cognitive levels. However, "application" can be taken as a transition from the lower-cognitive level to the higher-cognitive level, and so there could be MCQs testing application that we classified as lower/higher cognition, as posited by Crowe et al. ([12]). The analysis of the questions based on Bloom's taxonomy was conducted independently by two researchers, and their analyses were subsequently compared and any differences discussed until an agreement was reached.

For the second tier of analysis, we focused on students' efforts as they move from merely learning content to applying it. This aligns with the novice-to-expert progression in skill acquisition previously proposed (Dreyfus, [13]). We deemed this to be relevant in an undergraduate Life Sciences degree programme and considered undergraduates to range from advanced beginners to approaching expertise in terms of being able to apply their knowledge in both context-free and situational settings (Dreyfus, [13]). We therefore derived a separate framework to evaluate the MCQs based on the idea of competence (Alexander, [2]) that a question designer required in order to construct a particular MCQ. Competency was used as a measure not only of the "foundational body of domain knowledge" but also of levels that demonstrated "more cohesive and principled" use of the knowledge (Alexander, [2]). The assumption was that setting a question could require a higher than basic level of competency even though the question might target a low cognitive level according to Bloom's taxonomy. Three competency levels were derived based on the idea of a more integrated use of knowledge and its application. These levels are shown in Table 1.

Table 1. Descriptors of the three competency levels.

Competency level demonstrated by the question-setter | Descriptors
Level 1 | Foundational domain knowledge and ability to ask about details of cellular processes
Level 2 | Includes level 1 competency and ability to ask questions requiring the application of specific cellular functions to normal and abnormal situations and prediction of outcomes, within a topic
Level 3 | Includes level 2 competency and ability to ask questions requiring the application of specific cellular functions to normal and abnormal situations and prediction of outcomes, across different topics

The competency levels were based on the ideas of concept attainment and fluency in the learning of mathematics (Wu, [37]). We argue here that learning biological concepts falls along a continuum from knowing factual knowledge to being able to critique, question and apply the knowledge in a different situation. Since facts and higher-order thinking lie along a continuum rather than being discrete, dichotomous entities (Wu, [37]), they interact differently as learners increase in competence within a discipline. Based on this theoretical frame, the categories for the analytical framework to determine students' attainment of competencies within the topic are described in Table 1. The competency levels of the MCQs were subsequently coded by two raters (first and second authors). The frequency distributions based on the Bloom's coding and the coding for competencies were then compared between the raters. Discrepancies were discussed and consensus arrived at after re-analysis of the MCQs.
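As an illustration of this coding procedure, the short Python sketch below tallies each rater's frequency distribution and flags MCQs with differing codes; the MCQ identifiers and competency codes are hypothetical, and in the study the comparison and consensus-building were carried out through discussion rather than by script.

    from collections import Counter

    # Hypothetical competency-level codes (1-3) assigned independently by two raters,
    # keyed by MCQ identifier; the actual coding in the study was done manually.
    rater_1 = {"Q01": 1, "Q02": 2, "Q03": 1, "Q04": 3, "Q05": 2}
    rater_2 = {"Q01": 1, "Q02": 1, "Q03": 1, "Q04": 3, "Q05": 2}

    # Frequency distributions of codes for each rater, as compared in the study.
    print("Rater 1 distribution:", Counter(rater_1.values()))
    print("Rater 2 distribution:", Counter(rater_2.values()))

    # Discrepancies are flagged for discussion and re-analysis until consensus is reached.
    discrepancies = [q for q in rater_1 if rater_1[q] != rater_2[q]]
    print("MCQs to discuss:", discrepancies)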

Anonymous survey

After the end of the semester, an online survey using Google Forms was conducted. Students were emailed the link to respond to questions about the MCQ authoring assignment. Questions included rating experiences on a Likert scale as well as open-ended questions.

Results

Students designed MCQs targeting a range of Bloom's levels

We used the Peerwise platform, whose process prompts visibly display the structure of an MCQ, such as the stem and the options, to support students when writing MCQs. In addition, we also used Bloom's taxonomy as a framework for students to pitch their MCQs (Crowe et al., [12]), with examples included to illustrate how the design of MCQs on a given topic could change when pitched at different Bloom's levels (Appendix 2). The first layer of evaluation of students' MCQs examined whether students were able to design questions of sufficiently high quality using Bloom's taxonomy, given that more effort, and hence cognitive engagement, might be needed by students for higher Bloom's levels.

We noted that students' MCQs largely targeted the "comprehension" and "application" levels of Bloom's taxonomy (Figure 1). In terms of the cognitive levels, 47.5% of the questions targeted the lower-cognitive level while 52.5% targeted the lower/higher and higher cognitive levels. These numbers suggest that students were able to design questions that targeted higher cognitive levels.

PHOTO (COLOR): Figure 1. Frequency distribution of students' MCQs categorised using Bloom's taxonomy. Sample questions were analysed to determine the level of Bloom's that the questions were targeting. (n = 40).

Qualitatively, the MCQs constructed ranged from "textbook"-type questions to more "authentic" questions that required data analysis. MCQs judged to be at the "knowledge" level of Bloom's taxonomy, where recall of facts was required (Crowe et al., [12]), were typically more textbook-style questions. These MCQs typically focused on straightforward definitions of terms or descriptions of cellular processes (Figure 2). For example, one question (Figure 2(a)) asked about the function of a specific enzyme complex during the process of cell division; moreover, the explanation of the answer provided by the student was not accurate. A similar question was designed about cell division, but targeted a specific phase of cell division (Figure 2(b)). A statement serving as an explanation was provided for each of the choices, though the statements did not refer back to the question directly. Generally, the answers to these textbook questions could be found in textbooks or in the lecture notes.

Graph: Figure 2. Examples of textbook-type MCQs. (a) A straightforward MCQ based on recall of the functions of different cyclin-CDK complexes. The explanation provided is not entirely accurate, which could be due either to a problem in understanding the concept of cyclin-CDK complexes in triggering mitosis or to an issue with expression. (b) Another example of a textbook question, on the DNA replication process. While the question looks seemingly complex, it mainly targeted knowledge. The explanation provided here was relatively more detailed, with a statement on each of the choices in the MCQ. (c) A direct question on mitosis that tested knowledge and comprehension.

In other textbook-style questions, "comprehension" was tested in MCQs that involved relating various descriptions of processes or molecular functions to cell division. For instance, a question asking students to identify a false statement about events of mitosis was posed, which required understanding of how the different options are related to mitosis and whether the descriptions of the events are true (Figure 2(c)). The explanation for the question required the student to understand how activating and switching off a spindle checkpoint could affect the progression of the cell division cycle.

Among authentic questions, it was not unusual to find MCQs incorporating a case study, a scenario, or data from scientific articles (Figure 3). Here, an example can be seen where graphical data and Western blot data were presented as a case on which a question was based. Such questions typically required students attempting them to evaluate the data and hence would typically require higher-cognitive skills than the textbook-style questions. The explanations provided by the question-setter also indicated that a higher level of thinking skill was needed to arrive at the correct answers.

Graph: Figure 3. Example of an authentic question targeting Bloom's analysis. The question was designed using data taken from a research article and targeted data analysis in addition to content knowledge. The key ideas tested in the question included the DNA damage checkpoint and ionising irradiation, functions of specific components of the checkpoint, and post-translational modifications. Technical expertise required to answer the question included Western blot analysis and ionising radiation and drug treatments of cells. The explanatory notes provided included links across topics, as well as interpretation of data (Ahn and Prives, [1]).

Analysis of students' MCQs by competencies made explicit students' efforts at authoring MCQs

The same MCQs were then analysed separately using the competency framework to gauge the knowledge and skills the question-setter required to design the questions. The premise here is that students needed to be equipped with competencies in the subject matter in order to construct questions at different levels of difficulty and complexity. As such, we felt that assessing only the Bloom's level at which an MCQ was targeted would underestimate the efforts that underpin students' abilities to design questions. Hence, our approach in this type of analysis was to distinguish between the Bloom's level that an MCQ targets for the person answering it and the competencies required of the question-setter to design it.

Based on the three levels of competencies, 45% of the questions required level 1 competency, while 25% required level 2 and 30% required level 3 (Figure 4). In MCQs requiring level 1 competency, the main characteristic was that students used knowledge concerning basic cellular processes to construct the questions. For instance, questions requiring the question-setter to recall events that occur to allow progression through cell division (Figure 5(a)) or specific processes that are activated in a cell in the presence of DNA damage (Figure 5(b)) were considered as level 1, as they were fairly straightforward. Table 2 shows the detailed content analysis of the competency requirements on the question-setter in order to construct the question in Figure 5(b). Essentially, this represented the minimal standard of knowledge expected of students. As expected, the explanations provided for the correct options and distractors were relatively basic and largely descriptive.

Table 2. Example of an analysis of an MCQ for the requirements of various competency levels.

Example question in Figure 5(b) – Competency level 1
Student needed to be able to:
  • relate the function of a checkpoint in activation to the presence of DNA damage.
  • recall the components affecting the DNA damage pathway such as p53 and MDM2, Chk1 and Chk2 and p21, and their relationships.
  • recall the functions of Mad2, Cdc20 and cyclin B-CDK1 in mitosis.

Example question in Figure 6(a) – Competency level 2
Student needed to be able to:
  • recognise DNA damage as causing problems to the cell division cycle.
  • describe processes related to DNA damage in G1 including roles of checkpoint components, effectors such as ATM, ATR, MDM, p53.
  • describe the roles of checkpoint targets such as 14-3-3 and Cdc25 – this required the student to relate Cdc25 localisation to 14-3-3 function. Also, this needed the student to relate Cdc25 localisation to the nuclear localisation signal, something taught in a different section earlier in the semester.

Example question in Figure 6(d) – Competency level 3
Student needed to be able to:
  • interpret the use of propidium iodide and its relationship with DNA.
  • interpret data from the flow cytometer technique – this requires some understanding of technical knowledge.
  • connect the use of BrdU to studying DNA replication – this requires some understanding of technical knowledge.
  • apply the knowledge of ionising radiation to DNA damage.
  • apply the knowledge of p53 in the DNA damage checkpoint function.
  • discriminate progression through cell division in normal and DNA damage situations – this required linking knowledge of different phases of the cell division cycle.
  • apply the concept of alleles – this required use of knowledge of genetics and inheritance from prior knowledge.

PHOTO (COLOR): Figure 4. Frequency distribution of students' MCQs categorised using competencies. Sample questions were analysed to determine the competency levels that the question-setters needed to design the questions. (n = 40).

Graph: Figure 5. Examples of MCQs evaluated at competency level 1. (a) and (b) show examples from the sample of 40 MCQs that were judged to require competency level 1 to write. See also Table 2.

Graph: Figure 6. Examples of MCQs evaluated at competency levels 2 and 3. (a) and (b) show examples from the sample of 40 MCQs that were judged to require competency level 2 to design. See also Table 2. (c) and (d) show examples from the sample of 40 MCQs that were judged to require competency level 3 to construct. See also Table 2.


For MCQs to be evaluated as requiring level 2 competency, students had to demonstrate the ability to apply concepts of specific cellular functions to different situations. For instance, one question-setter set up a case-based MCQ in which the student designed scenarios where disruption to the normal functioning of a certain cellular component or process took place and asked about the consequence of such occurrences (Figure 6(a)). What can be seen from the explanations of the options to such questions was that the students had to have deeper competencies in order to articulate the reasons for the answers and why the distractors were incorrect (Table 2). A slightly different form of level 2 competency was demonstrated by another question-setter who extended cellular concepts to diseased states. An example is shown in Figure 6(b), where the student linked mis-segregation of chromosomes to Down's syndrome.

As for students showing level 3 competencies, their MCQs were characterised by the ability to design scenarios that incorporated concepts from more than one topic taught. The implicit assumption was that they already possessed level 1 and 2 proficiencies and had moved beyond those to applying concepts in a more holistic manner. For instance, one of the MCQs incorporated concepts from across topics that were taught by two different lecturers within the same module (Figure 6(c)).

In other cases, students went to the extent of using experimental data from research articles in their MCQs (Figure 6(d)). The use of data from primary research articles to construct MCQs was not directly taught in class, though the instructor had, in her own questions, used such a strategy when designing quiz questions for formative assessments during her lessons. The sophisticated use of data from research articles that spanned across topics was not trivial, as it implied that students in a purely lecture-based module had the competencies to understand a range of experimental techniques (Table 2).

The research articles presented some level of difficulty for students, as they had to be able to relate experimental procedures and methodologies to concepts learnt in class. The links between experimental research and concepts were oftentimes not found in textbooks, as research papers normally describe contextualised problems that the authors were trying to solve. It should be highlighted that the students themselves had selected the research articles that formed the bases of the MCQs, further supporting the notion that these students demonstrated competencies higher than levels 1 and 2 in the topics covered.

Labelling MCQs using Bloom's taxonomy belies the competencies needed by question-setters

We evaluated the same 40 MCQs separately using Bloom's taxonomy and competency levels so as not to introduce bias during our categorisation using the two different criteria for analysis. We next compared the assessments of the questions under the two criteria, to determine if the different criteria could provide alternative perspectives on the cognitive engagement of students when constructing questions. Each of the 10 questions highlighted in the figures was evaluated by both Bloom's and competency levels, and the classifications are shown in Table 3.

Table 3. Summary of analysis using both frameworks for the 10 questions highlighted in the figures.

Classification | Figure number
Knowledge, Competency level 1 | 2A, 2B
Comprehension, Competency level 1 | 2C, 5A, 5B
Application, Competency level 2 | 6A, 6B
Application, Competency level 3 | 6C
Analysis, Competency level 3 | 3, 6D

As can be seen from the frequency distributions (Figure 7), questions targeting responders at Bloom's "knowledge" level were closely correlated with the lowest competency level needed by the question-setter. Using the example of the question in Figure 2(a), it can be seen that competency level 1 was needed to design a question that tested at the knowledge level. This was generally true of the other questions that fell within these two categories, indicating that constructing questions that target lower-order cognitive skills corresponded to the use of level 1 competency.

PHOTO (COLOR): Figure 7. Relative distribution of questions based on Bloom's versus competencies. The classifications of the same questions were evaluated using both frameworks for comparison. There is a tendency for MCQs targeting lower Bloom's to require lower competency levels to write (n = 40).

With regard to questions targeting responders at Bloom's "comprehension" level, all but one of the MCQs were judged at competency level 1; the exception was judged at competency level 2. This MCQ appeared to test at Bloom's "comprehension" of cell division. However, examination of the competencies needed to construct this question revealed that the MCQ needed the student not only to recall the functions of various enzymes and regulators of cell division, but also to extend their roles to checkpoint functions set in a simple scenario of a cell with abnormal size and damaged DNA. Bringing together ideas in a scenario required competencies to assimilate and use specific concepts and to express them in a scenario opposite to what was taught in class, and these could have been obscured by categorising the question as a "comprehension" question based on the level at which it was targeted. This might lead to an under-estimation of students' efforts at constructing questions.

"Application"-type MCQs based on Bloom's taxonomy can be differentiated into questions that tested responders' low-order or high-order cognitive skills. Among these questions, we noted that 5 and 7 out of 13 MCQs required use of level 2 and 3 competencies respectively. For example, the question in Figure 5(a) that needed a competency level of 2 to design, seemingly targeted application of knowledge, which is a lower/higher-order question (Crowe et al., [12]). However, the competencies involved in crafting the question needed combining the concepts of DNA damage at a specific point during cell division and cell cycle arrest, sub-cellular localisation of cell division regulators, checkpoint components and their specific functions on the regulators as well as protein degradation by ubiquitination. Other "Application"-type MCQs crafted demanded level 3 competencies from the question-setters. The remaining MCQs targeting at Bloom's "analysis" mostly needed students to use levels 2 and 3 competencies.

Student engagement while constructing MCQs

In the anonymous survey we conducted after the semester, 16.7% of the class responded (n = 54). About 78% of the respondents at least agreed that they got familiar with the lecture materials due to the MCQ authoring assignment (Figure 8). Also, about 77% of the respondents agreed that they reflected and thought about the topics discussed in the lectures when constructing the MCQs. As to whether they read outside of lecture materials when they were designing MCQs, about 56% of them agreed that they did so. Interestingly, about 60% of the respondents indicated that bonus marks motivated them when they were doing the assignment. This suggested that the MCQ-authoring assignment had engaged students with the materials in the module, and that the activity made explicit the use of different competencies by the students over the semester. The positive responses on reflection somewhat mirrored a recent study on nursing students tasked to write MCQs (Craft et al., [11]).

PHOTO (COLOR): Figure 8. Survey on students' self-perception of engagement with module. Data show responses (n = 54) to Likert-scale questions on different aspects of participating in the question-authoring assignment.

Among the 12 free-text responses we obtained, five comments noted that a number of questions were very easy. Also, there were errors in several of the questions, and students wanted incorrect questions filtered out. One comment suggested awarding bonus marks more generously to those who referred to research articles, and another suggested increasing the weighting of the assignment to encourage students to make more questions of better quality. Three other comments reflected the relatively positive responses to the Likert-scale questions in terms of engagement with course materials while designing MCQs (Figure 8).

In addition to the survey responses, we also examined the actual participation of the students in the assignment. At least 90% of students scored full participation marks for authoring MCQs for the assignment. From the samples of MCQs, the students had to paraphrase the content in the questions as well as in the explanations for the answers (Figure 2). This suggests that the basic question-design activity was effective in engaging students even when constructing questions that minimally required level 1 competencies. About 10.8% and 16.7% of students were awarded bonus marks for the MCQs submitted in the first and second half of the semester, respectively. This was even though the marking was a low-risk format that awarded only four possible bonus marks for any two good MCQs designed by each student. Efforts among students who designed authentic questions that won bonus marks included using data from research articles. The fact that students read research articles they found on their own indicated a certain level of motivation for a very small number of marks. However, not all the questions were given bonus marks, as not all were of sufficient quality even though there was cognitive engagement.

Discussion

Student-generated questions have been found to improve academic performance (e.g. Chin & Brown, [9]; Hardy et al., [20]; McQueen et al., [27]). This improvement could be linked to the idea of writing-to-learn (Halliday, [19]; Keys, [23]), where students are engaged in "knowledge-telling" (Scardamalia, Bereiter, & Steinbach, [34]), a generative activity (Osborne & Wittrock, [30]) that supports deep learning (Marton & Säljö, [26]). It was previously suggested that writing in the scientific genre stimulates students' thinking (Keys, [23]), which is consistent with findings in other studies involving different writing-to-learn strategies (e.g. Balgopal & Wallace, [5]; Kelly & Takao, [22]; Quitadamo & Kurtz, [32]). In previous studies, students' performance in academic tasks was taken as an indirect measure of student engagement with course materials (Bottomley & Denny, [8]; McQueen et al., [27]).

In our current analysis, links to students' performance in the module were not made, as there are other activities within and outside the module that could have confounded such an analysis of students' performance. Rather, our initial analysis using Bloom's taxonomy, which examined whether students were able to design questions targeting various Bloom's levels, indicated that students were able to design a range of MCQs, and this was not different from previous studies (Bottomley & Denny, [8]; McQueen et al., [27]). Using Bloom's taxonomy for evaluation provided us some evidence that different students were able to design MCQs at different levels.

We further performed our analyses on the demands on the question-setters instead of the level that their questions were targeting. This was to make more explicit how students were able to apply what they learned in the lectures and course materials when designing MCQs. Indeed, our framework examining the different competency levels, ranging from foundational knowledge to more coherent use of knowledge (Alexander, [2]), allowed us to better understand the levels of effort students made in writing the MCQs. For instance, even though slightly over 50% of the MCQs required only competency level 1 to write, they nonetheless required students to familiarise themselves with the course materials, even if at a fundamental level. Moreover, we noted an example of an MCQ where the Bloom's taxonomy level that the question targeted might underestimate the competency level required of the student to author the question (see Figure 6(c)). Use of a competency-level evaluation rubric might thus support the use of student-generated MCQs for assessing students' learning attainment.

We would further suggest that there was cognitive engagement (Fredricks et al., [17]), which was evident from the types of questions we have highlighted that included data from research articles (Figure 3). Although slightly less than 50% of the sample questions required competency level 2 or 3, we would like to suggest that this might well have been students' first attempt at designing an MCQ and hence they might not have known how to design questions drawing on higher competency levels, since designing MCQs was not something taught to students directly. This is akin to teachers having a high level of content knowledge and competency but choosing to target lower Bloom's levels in their MCQs, which demand only level 1 competency. As such, designing a lower-level Bloom's question or utilising level 1 competency is not necessarily a direct indication of a student's grasp of the content or competencies. We suggest that having an additional competency framework to examine students' MCQs could provide a more systematic overview of students' engagement. In addition, the rubric for the competency levels could serve as an alternative process scaffold (Lin et al., [25]), instead of Bloom's taxonomy, for students to use when authoring MCQs.

For our in-depth analysis, we sampled only 40 questions from a question bank of more than 1,200 questions contributed by the students. As was also noted by five of the survey respondents, there were quality issues with a proportion of the MCQs that students had designed, which we noticed when grading the questions. This is not unlike the observation made in a previous study (Bottomley & Denny, [8]), where the questions generated by students could fall on the lower end of Bloom's taxonomy or contain errors (e.g. Figure 2(a)). Hence, although some students were able to construct MCQs targeting various Bloom's levels and utilising different competency levels, better scaffolds could be provided to enable more students to utilise higher competencies during question construction (Yu, [39]). For instance, questions that were unclear or contained errors could benefit from scaffolding in terms of both content and question construction. Also, the inaccurate explanations provided by the students further demonstrated that more guidance and feedback could be given, both at the level of understanding the concepts and of applying them in the MCQ writing and explanations. Such scaffolding could be provided by the instructor, though given a large class size, peer support could also be useful (Yu, [39]), as we have seen students commenting on questions written by their classmates. This is an area that we would explore further in subsequent semesters.

It should be noted that for a large class, there could be an overwhelming number of questions, as in our case. Care should be taken in the assignment design such that each student submits a limited number of questions. Although we observed that students commented on one another's questions occasionally, the number of questions should ideally be something that the teaching team can evaluate in a timely manner. In our module, although the instructor did go through all the questions, it took a long time. Nonetheless, the fact that students were able to gauge the questions as easy or spot errors indicated that such a question-generating activity engaged students at a level that had not been obvious previously when didactic lectures and closed-book summative assessments were used.

There are other possible writing-to-learn assignments, such as getting students to write essays, blogs or reports, that would also foster engagement (e.g. Balgopal & Wallace, [5]; Ellis, [14]; Kelly, Chen, & Prothero, [21]; Quitadamo & Kurtz, [32]; Xie, Ke, & Sharma, [38]). In our case, getting students to author questions, as opposed to doing other types of writing assignments, required them to consider the different distractors, which should be close to the answer. This itself would require higher-order cognitive skills such as analysis and evaluation. Moreover, students needed to write explanations for their questions. These are aspects of the student question-authoring assignment that provide comprehension-fostering and comprehension-monitoring activities (Palincsar & Brown, [31]) for both the students and the teacher.

Given the technologies available to turn passive learning into active and self-regulated learning in and out of classrooms (Säljö, [33]), it is no longer acceptable to view student learning as merely the passive acquisition of information from instructors that is subsequently given back during summative assessments. Instead, learning should be based on synthesis of something new, interesting or consequential, building on students' prior knowledge. The use of a guided generative activity mediated by an online platform such as Peerwise could support student learning by increasing their engagement with the course materials.

In our study, we examined whether the use of a student question-authoring assignment would improve student engagement with course materials. From our data, we noted that the writing of MCQs, even those needing level 1 competency, fostered student engagement with the course materials, as students had to acquaint themselves with the concepts necessary to write MCQs. Overall, judging from students' participation scores, we concluded that there was academic engagement with the assignment and cognitive engagement among several students. Indeed, survey respondents' perception that the MCQ-authoring activity engaged them with the materials taught in class supported this notion. This is important for the instructor, who had some vague notions from student feedback that engagement with course materials was low throughout the semester. For instance, in previous end-of-semester formal student feedback surveys, students complained that they had to memorise materials and that they usually did so only prior to assessments. Hence, the move from assessing student learning mostly through closed-book summative assessments, once in the middle and once at the end of the semester, to a question-generating assignment that spanned the semester created opportunities for students to apply knowledge and concepts covered in lectures.

Acknowledgments

We are grateful to Paul Denny, University of Auckland, for the use of Peerwise.

Disclosure statement

No potential conflict of interest was reported by the authors.

Appendix 1

The assignment marking rubric shared with students, including the basic description of what constitutes a good MCQ and the topics around which they were expected to design their MCQs.

Graph

Appendix 2

The scaffolding provided to students at the start of the assignment, in the form of examples of MCQs targeting different Bloom's levels.

Graph

References

1. Ahn, J., & Prives, C. (2002). Checkpoint kinase 2 (Chk2) monomers or dimers phosphorylate Cdc25C after DNA damage regardless of threonine 68 phosphorylation. Journal of Biological Chemistry, 277(50), 48418–48426.
2. Alexander, P. A. (2003). The development of expertise: The journey from acclimation to proficiency. Educational Researcher, 32(8), 10–14.
3. Allen, D., & Tanner, K. D. (2005). Infusing active learning into the large-enrollment biology class: Seven strategies, from the simple to complex. CBE-Life Sciences Education, 4, 262–268.
4. Armbruster, P., Patel, M., Johnson, E., & Weiss, M. (2009). Active learning and student-centered pedagogy improve student attitudes and performance in introductory biology. CBE-Life Sciences Education, 8(3), 203–213.
5. Balgopal, M. M., & Wallace, A. M. (2009). Decisions and dilemmas: Using writing to learn activities to increase ecological literacy. The Journal of Environmental Education, 40(3), 13–26.
6. Bates, S. P., Galloway, R. K., Riise, J., & Homer, D. (2014). Assessing the quality of a student-generated question repository. Physical Review Special Topics - Physics Education Research, 10, 020105.
7. Biggs, J. (1999). What the student does: Teaching for enhanced learning. Higher Education Research & Development, 18, 57–75.
8. Bottomley, S., & Denny, P. (2011). A participatory learning approach to biochemistry using student authored and evaluated multiple-choice questions. Biochemistry and Molecular Biology Education, 39(5), 352–361.
9. Chin, C., & Brown, D. E. (2002). Student-generated questions: A meaningful aspect of learning in science. International Journal of Science Education, 24(5), 521–549.
10. Colbert, J. T., Olson, J. K., & Clough, M. P. (2007). Using the web to encourage student-generated questions in large-format introductory biology classes. CBE-Life Sciences Education, 6, 42–48.
11. Craft, J. A., Christensen, M., Shaw, N., & Bakon, S. (2017). Nursing students collaborating to develop multiple-choice exam revision questions: A student engagement study. Nurse Education Today, 59, 6–11.
12. Crowe, A., Dirks, C., & Wenderoth, M. P. (2008). Biology in Bloom: Implementing Bloom's taxonomy to enhance student learning in biology. CBE-Life Sciences Education, 7, 368–381.
13. Dreyfus, S. E. (2004). The five-stage model of adult skill acquisition. Bulletin of Science, Technology & Society, 24, 177.
14. Ellis, R. A. (2004). University student approaches to learning science through writing. International Journal of Science Education, 26(15), 1835–1853.
15. Finn, J. D., & Zimmer, K. S. (2012). Student engagement: What is it? Why does it matter? In S. L. Christenson (Ed.), Handbook of research on student engagement (pp. 97–131). Boston, MA: Springer. doi:10.1007/978-1-4614-2018-7
16. Fiorella, L., & Mayer, R. E. (2016). Eight ways to promote generative learning. Educational Psychology Review, 28, 717–741.
17. Fredricks, J., Blumenfeld, P., & Paris, A. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74(1), 59–109.
18. Freeman, S., Eddy, S. L., McDonough, M., Smith, M. K., Okoroafor, N., Jordt, H., & Wenderoth, M. P. (2014). Active learning increases student performance in science, engineering, and mathematics. Proceedings of the National Academy of Sciences, 111(23), 8410–8415.
19. Halliday, M. (1993). Towards a language-based theory of learning. Linguistics and Education, 5, 93–116.
20. Hardy, J., Bates, S. P., Casey, M. M., Galloway, K. W., Galloway, R. K., Kay, A. E., ... McQueen, H. A. (2014). Student-generated content: Enhancing learning through sharing multiple-choice questions. International Journal of Science Education, 1–15. doi:10.1080/09500693.2014.916831
21. Kelly, G. J., Chen, C., & Prothero, W. (2000). The epistemological framing of a discipline: Writing science in university oceanography. Journal of Research in Science Teaching, 37(7), 691–718.
22. Kelly, G. J., & Takao, A. (2002). Epistemic levels in argument: An analysis of university oceanography students' use of evidence in writing. Science Education, 86(3), 314–342.
23. Keys, C. W. (1999). Revitalizing instruction in scientific genres: Connecting knowledge production with writing to learn in science. Science Education, 83, 115–130.
24. Lee, H. W., Lim, K. Y., & Grabowski, B. L. (2009). Generative learning strategies and metacognitive feedback to facilitate comprehension of complex science topics and self-regulation. Journal of Educational Multimedia and Hypermedia, 18(1), 5–25.
25. Lin, X., Hmelo, C., Kinzer, C. K., & Secules, T. J. (1999). Designing technology to support reflection. Educational Technology Research and Development, 47(3), 43–62.
26. Marton, F., & Säljö, R. (1976). On qualitative differences in learning: I. Outcome and process. British Journal of Educational Psychology, 46, 4–11.
27. McQueen, H. A., Shields, C., Finnegan, D. J., Higham, J., & Simmen, M. W. (2014). Peerwise provides significant academic benefits to biological science students across diverse learning tasks, but with minimal instructor intervention. Biochemistry and Molecular Biology Education, 42, 371–381.
28. Michael, J. (2006). Where's the evidence that active learning works? Advances in Physiology Education, 30(4), 159–167.
29. Morse, D., & Jutras, F. (2008). Implementing concept-based learning in a large undergraduate classroom. CBE-Life Sciences Education, 7, 243–253.
30. Osborne, R. J., & Wittrock, M. C. (1983). Learning science: A generative process. Science Education, 67(4), 489–508.
31. Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction, 1(2), 117–175.
32. Quitadamo, I. J., & Kurtz, M. J. (2007). Learning to improve: Using writing to increase critical thinking performance in general education biology. CBE-Life Sciences Education, 6, 140–154.
33. Säljö, R. (2010). Digital tools and challenges to institutional traditions of learning: Technologies, social memory and the performative nature of learning. Journal of Computer Assisted Learning, 26(1), 53–64.
34. Scardamalia, M., Bereiter, C., & Steinbach, R. (1984). Teachability of reflective processes in written composition. Cognitive Science, 8(2), 173–190.
35. Vygotsky, L. (1980). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
36. Wood, W. (2009). Innovations in teaching undergraduate biology and why we need them. Annual Review of Cell and Developmental Biology, 25, 93–112.
37. Wu, H. (1999). Basic skills versus conceptual understanding. American Educator, 23(3), 14–19.
38. Xie, Y., Ke, F., & Sharma, P. (2008). The effect of peer feedback for blogging on college students' reflective learning processes. The Internet and Higher Education, 11(1), 18–25.
39. Yu, F. Y. (2009). Scaffolding student-generated questions: Design and development of a customizable online learning system. Computers in Human Behavior, 25(5), 1129–1138.

By Foong May Yeong; Cheen Fei Chin and Aik Ling Tan


Foong May Yeong is an Associate Professor at the Department of Biochemistry, National University of Singapore. She is a Fellow of the NUS Teaching Academy and a Core member of ALSET at NUS. She is a yeast cell biologist with an interest in biology education in the higher-education context. Her interests in education research revolve around approaches to improve student engagement in and out of classes, and the development of broad-based competencies for biology undergraduates.

Cheen Fei Chin is a Postdoctoral Research Fellow at the Department of Biochemistry, National University of Singapore. His research interests include student engagement and formative assessment for large undergraduate classes.

Aik Ling Tan is an Associate Professor at the Natural Sciences and Science Education academic group at the National Institute of Education, Nanyang Technological University, Singapore. She is currently the Deputy Head for Teaching and Curriculum matters. Her research examines classroom interactions and emotions in science learning through studying talk.
