Student engagement in large Life Sciences classes can be problematic, especially when course work is done outside formal class contact hours. To enhance student engagement with the content outside class time, we designed a semester-long assignment that required students to author MCQs. We used Bloom's taxonomy to evaluate the MCQs. Additionally, we derived a three-level framework to analyse the demands on the student question-setters by determining the competencies required to construct the MCQs. This two-tier analysis of MCQs allowed us to gauge the level of student engagement with course materials. The three-level competency framework ranged from students' foundational domain knowledge at level 1, to application and prediction of cellular functions in normal and abnormal situations within a topic at level 2, and across different topics at level 3. Of 40 sample MCQs, slightly over 50% targeted mid- to high-level Bloom's taxonomy, and slightly under 50% required attainment of level 2 and 3 competencies for construction. We noted a high level of academic engagement and some cognitive engagement among several students, consistent with self-reports in an anonymous student survey conducted after the semester. We suggest that using a competency framework to analyse student-authored MCQs can make explicit students' efforts at constructing MCQs.
Keywords: Undergraduate; competency levels; student engagement; student-generated questions
Large-class teaching of biology at the undergraduate level needs more active-learning approaches rather than reliance on purely didactic teaching in lecture theatres. Didactic teaching, though efficient at disseminating information, is largely passive and can lead to lower student engagement in learning (Wood, [
For our large class of Life Sciences undergraduates in a Cell Biology module, we designed a generative-learning assignment involving students authoring MCQs as an activity to extend their engagement outside formal curricular time. Authoring MCQs required students to think actively about the content and apply it to construct the questions as well as the distractors. As an activity, authoring MCQs showed the components of generative learning in (
The student-generated MCQs were first evaluated using Bloom's taxonomy. In this analysis, we established the level of knowledge required of someone attempting the question. In the second pass of evaluations, we analysed the demands on our student question-setters when they designed the MCQs, based on the competencies they required to construct them. This placed the focus of our analysis back onto the question-setters' abilities instead of the MCQs themselves. The analysis was done on the activity of student authoring of MCQs in and of itself, to better understand how the activity promoted engagement by examining the competencies students used to design the MCQs.
Within educational institutions, there are different forms and levels of student engagement in learning activities, depending on the types of learning activities, the content, and the students' ages and abilities. According to Finn and Zimmer ([
Among the different instructional strategies, active-learning in class is one way to increase student engagement (Allen & Tanner, [
Different strategies of generative-learning have been proposed, including but not limited to summarising, self-testing, self-explaining, and teaching (Fiorella & Mayer, [
The use of student-generated questions has previously been shown to be useful for engaging students. For instance, a study in large-class introductory biology courses where students were encouraged to generate questions revealed that the activity helped students learn by engaging them in constructing meaningful questions (Colbert, Olson, & Clough, [
Students who have had experience answering MCQs have not necessarily been exposed to Bloom's taxonomy. As such, in order to author MCQs, they needed to be guided to understand the levels of Bloom's taxonomy. Indeed, based on Vygotsky's idea of the zone of proximal development (Vygotsky, [
A study by Yu ([
There are varied ways to evaluate the quality of MCQs. Some researchers analysed students' questions using criteria such as clarity of questions, errors in the questions and feasibility of distractors (e.g. Bottomley & Denny, [
The evaluation of student-generated MCQs using Bloom's taxonomy generally assumes that the Bloom's level at which a student question-setter targeted his/her MCQ is a good measure of the cognitive demands on the student himself/herself when designing the question (Bottomley & Denny, [
However, there is little agreement among studies as to whether students were able to compose MCQs testing higher order cognitive skills according to Bloom's taxonomy (Bates et al., [
In our study, we aimed to answer the following research questions:
- Were students able to generate MCQs that target different levels of cognitive skills based on Bloom's taxonomy?
- What were the competencies that the students showed when they designed MCQs?
- What was the alignment between the Bloom's levels the MCQs targeted and the competencies required to construct the MCQs?
- What forms of engagement were exhibited when students designed MCQs?
The results from the four research questions provide evidence on the usefulness of the competency-based rubric for scaffolding students' authoring of MCQs. Further, the competency-based rubric would also provide instructors with a more rounded perspective of students' abilities to apply Life Sciences concepts as they write the MCQs.
In our study, we aimed to increase student engagement using Peerwise as the mediating tool. The target students were second-year Life Sciences undergraduates taking an essential Cell Biology module (LSM2103) during the academic year 2014/2015. An assignment was designed that required students to generate one MCQ for each of the four topics of Cell Biology, namely, organelle biogenesis, protein trafficking, cell division and signaling.
The student-generated MCQs should each have a stem, one key and four distractors. The assignment spanned the 13-week semester and constituted 8% of the students' final scores. For the first half of the semester, students had to submit two MCQs, followed by another two in the second half. Of this 8%, 4% was awarded as participation points for a total of four MCQs submitted, with 4% bonus points awarded for good MCQs.
With 324 students, a substantial bank of MCQs was available before the end of the semester. Students used this as a review tool before the summative assessment. Each student was awarded a maximum of 2% final points for answering more than 40 MCQs.
The criteria for evaluating each MCQ included writing in Standard English, relevance of the questions to the learning objectives (such as learning to apply content knowledge in a contextual manner), picking a correct key, providing three reasonable distractors out of four and writing a good explanation for their question and correct answer. The grading criteria for awarding bonus points to good student-generated MCQs (Appendix 1) were explained to students during one of the lectures before the assignment started. We scaffolded students' MCQ writing using Bloom's taxonomy (Crowe et al., [
Students' consent was sought before the end of the semester. They were provided with information on the aims of the study. An email was sent to students by the administrative staff in our department to explain the research project and to recruit participants. A short explanation of the project was given during one of the classes, and students could ask questions related to the project. A hardcopy of the participant information sheet was provided to students for detailed information. Through a consent form, students were asked if they would like to participate in the project and allow the lecturer (first author) to examine their MCQs after the semester was over. They were informed that there would be no repercussions whether they agreed to participate or otherwise.
A physical copy of the consent form was given to each student during one of the classes when the project was explained. Students were given time to read through the participant information sheet and consent form and decide if they wished to participate in the study. Those who wanted to participate were told to hand their forms to the administrative staff, who collected the forms at the end of the class and was also available to receive forms at the Biochemistry Department office on other days. Ethical approval was sought from the authors' affiliated university.
Students constructed and uploaded their MCQs to the Peerwise site during the semester. After the semester, the students' MCQs were downloaded as PDFs. From those who consented to further analysis, we randomly selected 40 MCQs on the topic of "Cell Division" for our study.
The unit of analysis was an individual MCQ. Content analysis was used to examine students' MCQs, with the various levels of Bloom's taxonomy and the competency levels as the themes identified for our analysis. Bloom's taxonomy was the first analytical framework, used to classify the students' MCQs based on whether the questions targeted different cognitive levels on the part of the individuals answering them; that is, whether the MCQs tested other students in terms of the knowledge, comprehension, application or analysis needed to answer the questions. Based on previous suggestions (Crowe et al., [
For the second tier of analysis, we focused on students' efforts as they moved from merely learning content to applying it. This aligns with the novice-to-expert progression in skill acquisition previously proposed (Dreyfus, [
Table 1. Descriptors of the three competency levels.
Competency level 1: Foundational domain knowledge and the ability to ask about details of cellular processes.
Competency level 2: Includes level 1 competency, plus the ability to ask questions requiring the application of specific cellular functions to normal and abnormal situations and the prediction of outcomes, within a topic.
Competency level 3: Includes level 2 competency, plus the ability to ask questions requiring the application of specific cellular functions to normal and abnormal situations and the prediction of outcomes, across different topics.
The competency levels were based on the ideas of concept attainment and fluency in the learning of mathematics (Wu, [
After the end of the semester, an online survey using Google Forms was conducted. Students were emailed the link to respond to questions about the MCQ authoring assignment. Questions included rating experiences on a Likert scale as well as open-ended questions.
We used the Peerwise platform, whose process prompts visibly display the structure of an MCQ, such as the stem and the options, to help support students when writing MCQs. In addition, we also used Bloom's taxonomy as a framework for students to pitch their MCQs (Crowe et al., [
We noted that students' MCQs largely targeted the "comprehension" and "application" levels of Bloom's taxonomy (Figure 1). In terms of cognitive levels, 47.5% of the questions targeted the lower cognitive level while 52.5% targeted the lower/higher and higher cognitive levels. These numbers suggest that students were able to design questions targeting higher cognitive levels.
PHOTO (COLOR): Figure 1. Frequency distribution of students' MCQs categorised using Bloom's taxonomy. Sample questions were analysed to determine the level of Bloom's that the questions were targeting. (n = 40).
Qualitatively, the MCQs constructed ranged from "textbook"-type questions to more "authentic" questions that required data analysis. MCQs judged to be at the "knowledge" level of Bloom's taxonomy, where recall of facts was required (Crowe et al., [
Graph: Figure 2. Examples of textbook-type MCQs. (a) A straightforward MCQ based on recall of the functions of different cyclin-CDK complexes. The explanation provided is not entirely accurate, which could be due either to a problem in understanding the concept of cyclin-CDK complexes in triggering mitosis or to an issue with expression. (b) Another example of a textbook question, on the DNA replication process. While the question looks seemingly complex, it mainly targeted knowledge. The explanation provided here was relatively more detailed, with a statement on each of the choices in the MCQ. (c) A direct question on mitosis that tested knowledge and comprehension.
In other textbook-style questions, "comprehension" was tested in MCQs that involved relating various descriptions of processes or molecular functions to cell division. For instance, a question on identifying a false event of mitosis required understanding of how the different options related to mitosis and whether the descriptions of the events were true (Figure 2(c)). The explanation for the question required the student to understand how activating and switching off a spindle checkpoint could affect the progression of the cell division cycle.
For authentic questions, it was not unusual to find MCQs incorporating a case study, a scenario, or data from scientific articles (Figure 3). Here, an example can be seen where graphical data and Western blot data were presented as a case on which a question was based. Such questions typically required students attempting them to evaluate the data, and hence demanded higher cognitive skills than the textbook-style questions. The explanations provided by the question-setter also indicated that a higher level of thinking skill was needed to arrive at the correct answers.
Graph: Figure 3. Example of an authentic question targeting Bloom's analysis. The question was designed using data taken from a research article and targeted data analysis in addition to content knowledge. The key ideas tested in the question included the DNA damage checkpoint and ionising irradiation, functions of specific components of the checkpoint, and post-translational modifications. Technical expertise required to answer the question included Western blot analysis and ionising radiation and drug treatments of cells. The explanatory notes provided included links across topics, as well as interpretation of data (Ahn and Prives, [
The same MCQs were then analysed separately using the competency framework to gauge the knowledge and skills required of the question-setter to design the questions. The premise here is that students needed to be equipped with competencies in the subject matter in order to construct questions at different levels of difficulty and complexity. As such, we felt that assessing only the Bloom's level at which an MCQ was targeted would underestimate the efforts underpinning students' abilities to design questions. Hence, our approach was to distinguish between the Bloom's level an MCQ targets for someone answering the question and the competencies required of the question-setter to design it.
Based on the three levels of competencies, 45% of the questions required level 1 competency, while 25% required level 2 and 30% required level 3 (Figure 4). In MCQs requiring level 1 competency, the main characteristic was judged as students using knowledge of basic cellular processes to construct the questions. For instance, questions requiring the question-setter to recall events that allow progression through cell division (Figure 5(a)) or specific processes that are activated in a cell in the presence of DNA damage (Figure 5(b)) were considered level 1, as they were fairly straightforward. Table 2 shows the detailed content analysis of the competencies the question-setter required in order to construct the question in Figure 5(b). Essentially, this represented the minimal standard of knowledge expected of students. As expected, the explanations provided for the correct options and distractors were relatively basic and largely descriptive.
Table 2. Example of an analysis of an MCQ for the requirements of various competency levels.
Competency level 1 – Student needed to be able to:
- relate the function of a checkpoint in activation to the presence of DNA damage;
- recall the components affecting the DNA damage pathway, such as p53 and MDM2, Chk1 and Chk2 and p21, and their relationships;
- recall the functions of Mad2, Cdc20 and cyclin B-CDK1 in mitosis;
- recognise DNA damage as causing problems to the cell division cycle.

Competency level 2 – Student needed to be able to:
- describe processes related to DNA damage in G1, including roles of checkpoint components and effectors such as ATM, ATR, MDM, p53;
- describe the roles of checkpoint targets such as 14-3-3 and Cdc25 – this required the student to relate Cdc25 localisation to 14-3-3 function, and also to the nuclear localisation signal, something taught in a different section earlier in the semester;
- interpret the use of propidium iodide and its relationship with DNA;
- interpret data from the flow cytometer technique – this requires some understanding of technical knowledge;
- connect the use of BrdU to studying DNA replication – this requires some understanding of technical knowledge.

Competency level 3 – Student needed to be able to:
- apply the knowledge of ionising radiation to DNA damage;
- apply the knowledge of p53 in the DNA damage checkpoint function;
- discriminate progression through cell division in normal and DNA damage situations – this required linking knowledge of different phases of the cell division cycle;
- apply the concept of alleles – this required use of knowledge of genetics and inheritance from prior knowledge.
PHOTO (COLOR): Figure 4. Frequency distribution of students' MCQs categorised using competencies. Sample questions were analysed to determine the competency levels that the question-setters needed to design the questions. (n = 40).
Graph: Figure 5. Examples of MCQs evaluated at competency level 1. (a) and (b) show examples from the sample of 40 MCQs that were judged to require competency level 1 to write. See also Table 2.
Graph: Figure 6. Examples of MCQs evaluated at competency levels 2 and 3. (a) and (b) show examples from the sample of 40 MCQs that were judged to require competency level 2 to design. See also Table 2. (c) and (d) show examples from the sample of 40 MCQs that were judged to require competency level 3 to construct. See also Table 2.
PHOTO (COLOR): Figure 6. (Continued).
For MCQs to be evaluated as requiring level 2 competency, students had to demonstrate the ability to apply concepts of specific cellular functions to different situations. For instance, one question-setter set up a case-based MCQ with scenarios in which the normal functioning of a certain cellular component or process was disrupted, and asked about the consequences of such occurrences (Figure 6(a)). The explanations of the options to such questions showed that the students had to have deeper competencies in order to articulate the reasons for the answers and why the distractors were incorrect (Table 2). A slightly different form of level 2 competency was demonstrated by another question-setter who extended cellular concepts to diseased states. An example is shown in Figure 6(b), where the student linked mis-segregation of chromosomes to Down's syndrome.
As for students showing level 3 competencies, their MCQs were characterised by the ability to design scenarios incorporating concepts from more than one topic taught. The implicit assumption was that they already possessed level 1 and 2 proficiencies and had moved beyond those to applying concepts in a more holistic manner. For instance, one of the MCQs incorporated concepts from across topics that were taught by two different lecturers within the same module (Figure 6(c)).
In other cases, students went to the extent of using experimental data from research articles in their MCQs (Figure 6(d)). The use of data from primary research articles to construct MCQs was not directly taught in class, though the instructor had used such a strategy in her own quiz questions for formative assessments during her lessons. The sophisticated use of data from research articles spanning topics was not trivial, as it implied the competencies to understand a range of experimental techniques in a module that is purely lecture-based (Table 2).
The research articles presented some difficulty for students, as they had to relate experimental procedures and methodologies to concepts learnt in class. The links between experimental research and concepts were often not found in textbooks, as research papers normally describe contextualised problems that the authors were trying to solve. It should be highlighted that the students themselves had selected the research articles that formed the bases of the MCQs, further supporting the notion that these students demonstrated competencies higher than levels 1 and 2 in the topics covered.
We evaluated the same 40 MCQs separately using Bloom's taxonomy and competency levels so as not to introduce bias during our categorization using the two different criteria for analysis. We next made comparisons between the assessments of the questions using the two criteria, to determine if the different criteria could provide alternative perspectives as to the cognitive engagement of students when constructing questions. Each of the 10 questions highlighted in the figures has been evaluated both by Bloom's and competency levels and the classifications are shown in Table 3.
Table 3. Summary of analysis using both frameworks for the 10 questions highlighted in the figures.
Knowledge, Competency level 1: Figures 2A, 2B
Comprehension, Competency level 1: Figures 2C, 5A, 5B
Application, Competency level 2: Figures 6A, 6B
Application, Competency level 3: Figure 6C
Analysis, Competency level 3: Figures 3, 6D
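The two-framework classification in Table 3 can be tallied programmatically. The sketch below (an illustration only; the tuples simply restate Table 3's 10 example questions) cross-classifies each question's targeted Bloom's level against the competency level required of the question-setter:

```python
from collections import Counter

# The 10 example MCQs summarised in Table 3, as
# (figure label, Bloom's level targeted, competency level required).
examples = [
    ("2A", "knowledge", 1), ("2B", "knowledge", 1),
    ("2C", "comprehension", 1), ("5A", "comprehension", 1),
    ("5B", "comprehension", 1),
    ("6A", "application", 2), ("6B", "application", 2),
    ("6C", "application", 3),
    ("3", "analysis", 3), ("6D", "analysis", 3),
]

# Count how many examples fall into each (Bloom's, competency) pair.
crosstab = Counter((bloom, level) for _, bloom, level in examples)
for (bloom, level), n in sorted(crosstab.items()):
    print(f"{bloom:>13} / competency level {level}: {n} question(s)")
```

The resulting tally makes the broad alignment between the two frameworks visible: lower Bloom's levels co-occur with competency level 1, while "application" straddles competency levels 2 and 3.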
As can be seen from the frequency distributions (Figure 7), questions targeting responders at Bloom's "knowledge" level were closely correlated with the lowest competency level needed by the question-setter. Using the example of the question in Figure 2(a), it can be seen that competency level 1 was needed to design a question that tested at the knowledge level. This was generally true of the other questions that fell within these two categories, indicating that constructing questions that target lower-order cognitive skills corresponded to the use of level 1 competency.
PHOTO (COLOR): Figure 7. Relative distribution of questions based on Bloom's versus competencies. The classifications of the same questions were evaluated using both frameworks for comparison. There is a tendency for MCQs targeting lower Bloom's to require lower competency levels to write (n = 40).
With regard to questions targeting responders at Bloom's "comprehension" level, all were judged at competency level 1 except one, which was judged at competency level 2. That MCQ appeared to test "comprehension" of cell division. However, examination of the competencies needed to construct it revealed that the MCQ required the student not only to recall the functions of various enzymes and regulators of cell division, but also to extend their roles to checkpoint functions set in a simple scenario of a cell with abnormal size and damaged DNA. Bringing together ideas in a scenario required competencies to assimilate and use specific concepts and express them in a scenario opposite to what was taught in class, and these could have been obscured by categorising the question as a "comprehension" question based on the level it was targeting. This might lead to an under-estimation of students' efforts at constructing questions.
"Application"-type MCQs based on Bloom's taxonomy can be differentiated into questions that tested responders' lower-order or higher-order cognitive skills. Among these questions, we noted that 5 and 7 out of 13 MCQs required the use of level 2 and 3 competencies respectively. For example, the question in Figure 6(a), which needed level 2 competency to design, seemingly targeted application of knowledge, which is a lower/higher-order question (Crowe et al., [
In the anonymous survey we conducted after the semester, 16.7% of the class responded (n = 54). About 78% of the respondents at least agreed that they became familiar with the lecture materials through the MCQ-authoring assignment (Figure 8). Also, about 77% of the respondents agreed that they reflected on and thought about the topics discussed in the lectures when constructing the MCQs. As to whether they read beyond the lecture materials when designing MCQs, about 56% agreed that they did so. Interestingly, about 60% of the respondents indicated that bonus marks motivated them during the assignment. This suggested that the MCQ-authoring assignment had engaged students with the materials in the module, and that the activity made explicit the use of different competencies by the students over the semester. The positive responses on reflection somewhat mirrored a recent study of nursing students tasked to write MCQs (Craft et al., [
PHOTO (COLOR): Figure 8. Survey on students' self-perception of engagement with module. Data show responses (n = 54) to Likert-scale questions on different aspects of participating in the question-authoring assignment.
In the 12 free-text responses we obtained, five students commented that a number of the questions were very easy. Also, there were errors in several of the questions, and students wanted incorrect questions filtered out. One comment suggested awarding bonus marks more generously to those who referred to research articles, and another suggested increasing the weighting of the assignment to encourage students to make more questions of better quality. Three other comments reflected the relatively positive responses to the Likert-scale questions in terms of engagement with course materials while designing MCQs (Figure 8).
In addition to the survey responses, we also examined students' actual participation in the assignment. At least 90% of students scored full participation marks for authoring MCQs. From the samples of MCQs, the students had to paraphrase the content in the questions as well as in the explanations for the answers (Figure 2). This suggests that the basic question-design activity was effective in engaging students even when they constructed questions that minimally required level 1 competencies. About 10.8% and 16.7% of students were awarded bonus marks for the MCQs submitted in the first and second halves of the semester respectively, even though the marking was a low-risk format that awarded only four possible bonus marks for any two good MCQs designed by each student. Efforts among students who designed authentic questions that won bonus marks included using data from research articles. The fact that students read research articles they found on their own indicated a certain level of motivation for a very small amount of marks. However, not all questions were given bonus marks, as not all were of sufficient quality, even though there was cognitive engagement.
Student-generated questions have been found to improve academic performance (e.g. Chin & Brown, 2002; Hardy et al., [
In our current analysis, links to students' performance in the module were not made, as there are other activities within and outside the module that could have confounded such an analysis of students' performance. Rather, our initial analysis using Bloom's taxonomy to examine whether students were able to design questions targeting various Bloom's levels indicated that students were able to design a range of MCQs, and that this was not different from previous studies (Bottomley & Denny, [
We further performed our analyses on the demands on the question-setters instead of the level of questions they were targeting when designing questions. This was to make more explicit how students were able to apply what they learned in the lectures and course materials and use them in designing MCQs. Indeed, our framework that examined the different competency levels ranging from foundation knowledge to more coherent use of knowledge (Alexander, [
We would further suggest that there was cognitive engagement (Fredricks et al., [
For our in-depth analysis, we had sampled only 40 questions from the bank of more than 1200 contributed by the students. As was also noted by five of the survey respondents, there were quality issues with a portion of the MCQs that students had designed, which we saw when grading the questions. This is not unlike the observation made by a previous study (Bottomley & Denny, [
It should be noted that for a large class, there could be an overwhelming number of questions, as in our case. Care should be taken in the assignment design such that each student submits a limited number of questions. Although we observed that students occasionally commented on one another's questions, the number of questions should ideally be one that the teaching team could evaluate in a timely manner. In our module, the instructor did go through all the questions, but it took a long time. Nonetheless, the fact that students were able to gauge the questions as easy or spot errors indicated that such a question-generating activity engaged students at a level that had not been obvious previously, when didactic lectures and closed-book summative assessments were used.
There are other possible writing-to-learn assignments such as getting students to write essays, blogging or reports that would also foster engagement (e.g. Balgopal & Wallace, [
Given the technologies available to turn passive learning to active and self-regulated learning in- and out-of-classrooms (Säljö, [
In our study, we examined whether a student question-authoring assignment would improve student engagement with course materials. From our data, we noted that writing MCQs, even those needing level 1 competency, fostered student engagement with the course materials, as students had to acquaint themselves with the concepts necessary to write the MCQs. Overall, judging from students' participation scores, we concluded that there was academic engagement with the assignment and cognitive engagement among several students. Indeed, survey respondents' perception that the MCQ-authoring activity engaged them with the materials taught in class supported this notion. This is important for the instructor, who had some vague notions from student feedback that engagement with course materials was low throughout the semester. For instance, in previous end-of-semester formal student feedback surveys, students complained that they had to memorise materials and usually did so only prior to assessments. Hence, the move from using mostly closed-book summative assessments, once in the middle and once at the end of the semester, to a question-generating assignment that spanned the semester created opportunities for students to apply knowledge and concepts covered in lectures.
We are grateful to Paul Denny, University of Auckland, for the use of Peerwise.
No potential conflict of interest was reported by the authors.
The assignment marking rubric shared with students including the basic description of what constitutes a good MCQ and topics around which they were expected to design their MCQs.
Graph
The scaffolding provided to students at the start of the assignment in the form of examples of MCQs targeting at different Bloom's levels.
Graph
By Foong May Yeong; Cheen Fei Chin and Aik Ling Tan
Foong May Yeong is an Associate Professor at the Department of Biochemistry, National University of Singapore. She is a Fellow of the NUS Teaching Academy and a Core member of ALSET at NUS. She is a yeast cell biologist with an interest in biology education in the higher-education context. Her interests in education research revolve around approaches to improve student engagement in and out of classes, and the development of broad-based competencies for biology undergraduates.
Cheen Fei Chin is a Postdoctoral Research Fellow at Department of Biochemistry, National University of Singapore. His research interests include student engagement and formative assessment for large undergraduate classes.
Aik Ling Tan is an Associate Professor at the Natural Sciences and Science Education academic group at the National Institute of Education, Nanyang Technological University, Singapore. She is currently the Deputy Head for Teaching and Curriculum matters. Her research examines classroom interactions and emotions in science learning through studying talk.