Classroom Psychosocial Environment and Course Experiences in Pre-Service Teacher Education Courses at an Australian University

Dorman, Jeffrey P.
In: Studies in Higher Education, Jg. 39 (2014), Heft 1, S. 34-47

Research linking university students' perceptions of their classroom environment and course experiences was conducted in one Australian university. A sample of 495 students responded to the College and University Classroom Environment Inventory (CUCEI) and the Course Experience Questionnaire (CEQ). Multilevel regression analyses revealed that several CUCEI scales were significant predictors of CEQ scales. Overall, task orientation was the most potent predictor of all five CEQ scales: clear goals and standards, generic skills, good teaching, appropriate workload and appropriate assessment. Improvements in the classroom environment were linked to more positive course experiences, which are increasingly being taken as indicators of institutional performance. It is recommended that more attention be paid to classroom environment in colleges and universities.

Keywords: classroom environment; Course Experience Questionnaire; pre-service courses

An issue that is gaining increased attention from university administrators and managers worldwide is how students perceive their experiences at university. While this could be due to a genuine interest in the well-being of students, it is much more likely that this is simply part of the accountability movement sweeping through universities, particularly in western countries. In Australia, this process is in full flight, with the performance of graduating students being quantified and used to compare universities and even departments within universities (see, for example, Department of Education, Science and Training [8]; Graduate Careers Council Australia [29]). Indeed, performance on the Course Experience Questionnaire (CEQ: Ramsden [53]; Wilson, Lizzio, and Ramsden [66]) – an instrument that is administered to all graduating students from all Australian universities – is assuming great importance today to university administrators. Despite well-acknowledged conceptual and measurement limitations (see, for example, Wilson, Lizzio, and Ramsden [66]), the CEQ's bandwagon status remains undiminished.

Another issue that should be of great importance to tertiary educators is the quality of the learning environment that students encounter in universities. This field of research focuses on the psychosocial dimensions of the environment – those aspects that involve human behaviour in origin or outcome (Boy and Pine [2]). Accordingly, the concept of environment, as applied to educational settings, refers to the atmosphere, ambience, tone, or climate that pervades the particular setting (see Dorman [11]; Fraser [22]). The fundamental question for learning environment researchers is: 'What is it really like for students in this environment?' Importantly, research at all educational levels has shown that the quality of the learning environment is a potent predictor of student cognitive and affective outcomes (see, for example, Fraser [22]). The purpose of this article is to report research linking the field of learning environment research with course experience research. Before reporting the results of this study, a brief review of these two fields is provided.

Background

Research on university course experiences

Reviews of research literature on students' perceptions of their college or university experience have revealed a rapidly-developing research field. Despite these developments, Richardson's [54] review concluded that, while academic staff believe that student feedback is useful and informative, many academic staff and institutions do not take such feedback seriously. Administrative requirements like promotion applications have often been the driving force behind academic staff engaging in course and unit evaluations. Indeed, routine collection of evaluation data does not automatically lead to any improvement in teaching quality (Kember, Leung, and Kwan [36]), and only serious collegial deliberation by teaching staff on the results of evaluations will bring about changes in teaching. Pascarella and Terenzini [51] in the United States, and more recently McInnis [45] and McInnis and James [47] in Australia, have established that social interactions with staff and peers are the key determinants of intellectual and personal development outcomes. Yet, according to McInnis et al. [46], few instruments that purportedly evaluate college and university experiences truly assess these important areas of student life. One study that did assess personal development was reported by Lawless and Richardson [39] and Richardson [55]. They used the CEQ and the Personal and Educational Development Inventory (PEDI) to study the experiences of distance education students at the Open University (UK). Correlational analyses revealed that the CEQ and PEDI had significant convergent validity, with over 65% of shared score variance.

One comprehensive instrument that has been widely used in North American research is the Students' Evaluations of Educational Quality (SEEQ: Marsh [43]). It assesses nine dimensions of effective teaching: learning/value, enthusiasm, organisation, group interaction, individual rapport, breadth of coverage, examinations/grading, assignments and workload/difficulty. According to Marsh's original instrument development work, scale internal consistency reliabilities (Cronbach coefficient alpha) ranged from 0.88 to 0.97. Marsh [44] described three older instruments which have well-defined factor structures: Frey, Leonard, and Beatty's ([27]) Endeavour instrument, which assesses presentation clarity, workload, personal attention, class discussion, organisation/planning, grading and student accomplishments; Hildebrand, Wilson, and Dienst's ([32]) Student Description of Teaching, which assesses analytic/synthetic approach, organisation/clarity, instructor–group interaction, instructor–individual interaction and dynamism/enthusiasm; and Warrington's [63] Student Instructional Rating System, which assesses instructor involvement, student interest and performance, student–instructor interaction, course demands and course organisation. While dated, these instruments have sound construct validity and demonstrate that students' evaluations can assess distinct components of teaching effectiveness (Marsh [44]).

As introduced earlier in this article, the focus of the research reported here is the Course Experience Questionnaire (CEQ), which is increasingly being taken as a key performance indicator of universities and departments within Australian universities. The CEQ is designed to assess students' perceptions of five dimensions of course experience: clear goals and standards, generic skills, good teaching, appropriate workload and appropriate assessment. There is also one item that assesses overall course satisfaction. Generic skills and satisfaction have been considered as outcomes in some studies (see, for example, Lizzio, Wilson, and Simons [40]).

Many studies in this field have used instruments in conjunction with the CEQ. Recently, Richardson ([56], [57]) and Price, Richardson, and Jelfs [52] used the Revised Approaches to Studying Inventory (RASI: Entwistle and Tait [17]; Entwistle, Tait, and McCune [18]) and the CEQ to investigate associations between CEQ scale scores and deep, surface and strategic approaches to studying in universities. Richardson's studies found that, whereas all significant correlations between CEQ scale scores and deep and strategic approaches to learning were positive, all correlations between CEQ scale scores and surface approaches to learning were negative. Jelfs and Richardson [34] reported only slight differences in course experiences and approaches to learning and studying by disabled and non-disabled students at the Open University.

A Norwegian study by Diseth et al. [9] developed a structural model in which course experiences predicted students' approaches to studying. More specifically, good teaching predicted deep, strategic and surface approaches; clear goals predicted strategic approaches; and appropriate workload predicted deep and surface approaches to studying. In a similar earlier study, Lizzio, Wilson, and Simons [40] modelled deep and surface approaches to learning as possible mediators of the influence of good teaching on cognitive and affective outcomes. Brew and Ginns [3] investigated the relationship between Student Course Experience Questionnaire (SCEQ) results and an 'in-house' scholarship index. The SCEQ is very similar to the CEQ and the scholarship index was designed 'to provide recognition and reward for excellence in teaching and the communication of good practice' (536). Findings revealed a positive relationship between the scholarship of teaching and learning and student course experiences.

The CEQ has been subjected to substantial empirical scrutiny. For example, McInnis et al. [46] validated the items and scales of the CEQ and five additional scales using both Rasch modelling and classical test theory. Curtis and Keeves [6] analysed CEQ data using Rasch modelling and found that 8 of the 25 items did not fit the measurement model. Criticisms concerning the CEQ can be attributed to at least three areas: administration and feedback, methodological limitations and inadequate coverage of the learning experiences in universities. Davies et al. [7] note that the aggregated nature of the data and the lag before CEQ feedback is received by institutions make it difficult for institutions to engage in quality improvement activities. Furthermore, the CEQ is not designed to assess units within courses or the performance of individual academic staff. Yorke [68] and Eley [16] critiqued the design and form of the CEQ, including response style, acquiescence bias, item response formats, and the effect of item order on the questionnaire.

A final criticism of the CEQ is its inadequacy in tapping all relevant dimensions of the learning experiences in colleges and universities. Indeed, the CEQ does not adequately assess the social and psychological contexts in which student learning occurs. If teaching is a multidimensional skill, then obtaining a score on the good teaching scale will not be sufficiently fine-tuned to provide constructive information for teaching staff (Kember and Leung [35]). Another area of importance to tertiary education that is not assessed by the CEQ is the extent to which courses stimulate intellectual debate and challenge. That is, attempts by academic staff to engage students in higher order thinking are not assessed. In essence, within the CEQ, teaching is a technical act in which students are passive recipients and social environments are irrelevant to course experiences. In 1997, Wilson, Lizzio, and Ramsden noted that the quality of facilities (e.g. computing and library) and support services (e.g. enrolment advice) are clear omissions from the CEQ. Given the rapid shifts of the past decade towards a consumer-pays approach embedded in mandatory access to technology, this omission is simply unacceptable in any contemporary evaluation of course experiences. The CEQ is based on 1980s theories of curriculum and assessment, so its conceptual narrowness is not surprising. What is surprising is the almost universal bandwagon status afforded to it by the Australian tertiary education sector in 2010. For example, the Institute for Teaching and Learning, University of Sydney ([33], 1) states: 'The CEQ has attracted many critics over the years, however it has stood the test of time. Numerous research studies have concluded that the factor scales and the survey items they derive from are valid across repeated administrations to different cohorts'.

Learning environment research

Research on the psychosocial dimensions of classroom environments has made substantial progress during the past 40 years (see Dorman [11], [12]; Fraser [22]). This research has focused mainly on the atmosphere or climate that pervades the particular setting. The strong tradition of classroom environment research has been to conceptualise environments in terms of Murray's [49] beta press – the perceptions of the milieu inhabitants (i.e. students and teachers) – with instruments assessing particular dimensions of the environment (e.g. personalisation). Moos's conceptualisation of human environments as having relationship, personal growth, system maintenance and change dimensions has guided research in this field since the late 1960s (see Moos and Trickett [48]). Vivid descriptions and images of schools through powerful movies (e.g. To Sir, with Love) and less powerful dramatisations (e.g. Beverly Hills 90210) all attest to the centrality of the environment to the defining character of schools and classrooms.

Reviews of classroom environment research by Dorman [11] and Fraser [22] and edited books by Khine and Fisher [37] and Fisher and Khine [20] have reported research on the assessment, determinants and outcomes of learning environments. These studies have included comparisons of actual and preferred environments, the effects of determinants or antecedents on classroom environment (e.g. student gender, year, subject and school type), associations between classroom environment and outcomes, transition from primary to secondary school, evaluation of educational innovations, assessment of pre-service teacher education courses, differences between students' and teachers' perceptions of classrooms, and using environment instruments to alter classroom life. From a methodological perspective, articles by Dorman [12] and Dorman and Fraser [14] have demonstrated the effective use of structural equation modelling and multilevel analysis/hierarchical linear modelling when analysing learning environment data.

Studying associations between classroom environment and student outcomes has been the most prolific area of learning environment research. Studies have substantiated the clear link between the quality of the classroom environment and student cognitive and affective outcomes. In China, Wei, den Brok, and Zhou [65] used the Questionnaire on Teacher Interaction (Wubbels and Levy [67]) to establish links between teachers' interpersonal behaviour and student fluency in English in secondary schools. Another Asian study involving secondary school Singaporean students revealed links between the computer-mediated project-based classroom environment and attitudes toward project-based work (Seet and Quek [58]). Madu and Fraser [42] established positive associations between teachers' interpersonal behaviour, learning environments and student outcomes in five secondary schools in New York. Waldrip, Fisher, and Dorman [62] used assessments of the classroom environment to identify exemplary science teachers.

Over the past 40 years, many classroom environment instruments have been developed, validated and used in a range of educational settings. Some of the main instruments include the Classroom Environment Scale (CES: Moos and Trickett [48]), the Learning Environment Inventory (LEI: Fraser, Anderson, and Walberg [23]), the College and University Classroom Environment Inventory (CUCEI: Fraser [21]) and the What Is Happening In this Class (WIHIC: Fraser [22]). Additionally, context-specific instruments have been used in specific settings or for specific purposes. For example, Waxman et al. [64] recently developed a learning environment instrument to evaluate teachers' professional development.

In the 1980s, Fraser and Treagust [25] developed the CUCEI to assess the environment in classrooms in colleges and universities. This research broke new ground as previous university environment research had focused on the institutional-level environment. Fraser and Treagust reported the formulation of seven scales (personalisation, involvement, student cohesiveness, satisfaction, task orientation, innovation and individualisation) with very sound internal consistency reliabilities. Later, Fraser, Giddings, and McRobbie [24] researched the science laboratory environment in universities with a context-specific instrument called the Science Laboratory Environment Inventory. Despite these developments, no substantive research agenda on university classroom environment has been sustained during the past decade.

Some areas of recent classroom environment research include validating and using a modified Arabic translation of the WIHIC in the United Arab Emirates (MacLeod and Fraser [41]), studying the interpersonal behaviour styles of primary education teachers during science lessons (Fisher et al. [19]), using classroom psychosocial environment assessments in technologically-integrated pre-service teacher education courses in Singapore (Khine and Fraser [38]), investigating sports class learning environments (Dowdell, Tomson, and Davies [15]), and studying metacognitive aspects of the classroom environment in chemistry classes (Thomas and Anderson [61]). Classroom environment research has significant overlap with student engagement research (see Fredricks, Blumenfeld, and Paris [26]), and research by Cavanagh and Dellar [4] identified three major classroom environment influences on student engagement: relationships with classmates and the teacher, an orientation towards learning individually, and student confidence.

The present study

The research questions for this study were:

  • Can scales to assess classroom environment be validated with a sample of pre-service teacher education students in an Australian multicampus university? and
  • What is the relationship between students' perceptions of the classroom environment and their Course Experience Questionnaire scores?
Sample

The sample used in this study consisted of 495 students from three campuses of one Australian public university. As shown in Table 1, most students in the two primary school pre-service Bachelor of Education courses were female. However, this gender imbalance was reversed in the Bachelor of Arts/Bachelor of Teaching secondary school course. The data are hierarchical, with students nested in courses within the university.

Table 1. Description of sample.

Gender | BEducation (primary school, undergraduate) | BEducation (primary school, postgraduate) | BArts/BTeaching (secondary school, postgraduate) | Total
Male | 51 | 4 | 39 | 94
Female | 271 | 31 | 99 | 401
Total | 322 | 35 | 138 | 495

Assessment of course experiences and learning environment

Students responded to the Course Experience Questionnaire (CEQ: Ramsden [53]) and a version of the College and University Classroom Environment Inventory (CUCEI: Fraser [21]). The CEQ assesses students' perceptions of five dimensions of course experience: clear goals and standards, generic skills, good teaching, appropriate workload and appropriate assessment. The version of the CEQ used in the present study had 25 items. Each item employs a 5-point Likert response format (strongly disagree = 1, disagree = 2, not sure = 3, agree = 4, strongly agree = 5) with item scores aggregated to form scale scores for each respondent. The CUCEI is an established classroom environment instrument which assesses personalisation, involvement, student cohesiveness, satisfaction, task orientation, innovation and individualisation in university classes. In the present study, five items from each CUCEI scale were chosen. As for the CEQ, a 5-point Likert response format is used with the CUCEI item scores aggregated to form scale scores for each respondent. Table 2 shows a description of each CEQ and CUCEI scale.

Table 2. Descriptive information and validation data for five CEQ and seven CUCEI scales.

Scale | Description | No. of items | Coefficient α | M | SD

CEQ
Clear Goals & Standards | The extent to which clear goals and standards are evident in the course. | 4 | .59 | 12.70 | 2.59
Generic Skills | The extent to which university courses add to the generic skills that their graduates might be expected to possess. | 6 | .76 | 22.17 | 3.28
Good Teaching | The extent to which teaching supports practices like providing students with feedback on their progress, explaining things, making the course interesting, motivating students, and understanding students' problems. | 6 | .81 | 18.96 | 4.26
Appropriate Workload | The extent to which students have reasonable workloads. | 3 | .70 | 7.66 | 2.33
Appropriate Assessment | The extent to which assessment emphasises recall of factual information rather than higher order thinking. | 3 | .53 | 9.84 | 2.00

CUCEI
Personalisation | The extent to which opportunities exist for individual students to interact with the instructor and for the instructor to be concerned about students' personal welfare. | 5 | .81 | 16.36 | 3.41
Involvement | The extent to which students participate actively and attentively in class discussions and activities. | 5 | .75 | 18.33 | 2.36
Student Cohesiveness | The extent to which students know, help and are friendly towards one another. | 5 | .90 | 21.58 | 2.88
Satisfaction | The extent to which students enjoy classes. | 5 | .86 | 19.18 | 3.58
Task Orientation | The extent to which class activities are clear and well organised. | 5 | .78 | 18.05 | 3.27
Innovation | The extent to which the instructor plans new, unusual class activities, teaching techniques and assignments. | 5 | .77 | 19.41 | 3.30
Individualisation | The extent to which students are allowed to make decisions and are treated differentially according to ability, interest, and rate of working. | 5 | .78 | 15.37 | 3.85

Note: As CEQ documents do not explicitly state scale descriptions, these descriptions have been developed from scale items.

Data analysis

To validate the structure of the CUCEI and CEQ scales, exploratory factor analyses and reliability analyses were conducted. Multilevel regression analyses for each CEQ scale as a response variable and the set of classroom environment scales as explanatory variables were performed with MLwiN (Goldstein [28]). The purpose of these analyses was to examine which dimensions of the classroom environment influenced course experiences. Multilevel analyses were conducted because students are nested in courses. Student was the Level 1 variable and course was the Level 2 variable. Backward elimination of non-significant variables was conducted (p < .05). That is, all classroom environment scales were entered into regression equations and non-significant scales were removed. Parameter estimates and effect sizes for all significant explanatory variables were computed. Effect sizes provide an indication of the strength of the relationship between a classroom environment scale and a course experience scale. Correlations between explanatory and response variables were used to calculate effect sizes (ES) using Cohen's [5] formula r² = d²/(4 + d²), where r is the Pearson correlation and d is the effect size.
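
Cohen's formula can be rearranged to compute the effect size directly from a correlation: solving r² = d²/(4 + d²) for d gives d = 2r/√(1 − r²). A minimal sketch of this conversion (the function name is ours, not from the study):

```python
import math

def effect_size_from_r(r: float) -> float:
    """Invert Cohen's formula r^2 = d^2 / (4 + d^2) to obtain
    d = 2r / sqrt(1 - r^2). Valid for |r| < 1."""
    return 2 * r / math.sqrt(1 - r * r)
```

For example, a Pearson correlation of 0.6 between a classroom environment scale and a CEQ scale converts to an effect size of d = 1.5.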

Results

Validation of CEQ and CUCEI scales

A principal components factor analysis with varimax rotation revealed that 22 of the 25 CEQ items loaded on five factors, which accounted for 52.54% of the variance in item scores. These factors corresponded to the five a priori scales of the CEQ. Factor loadings ranged from .49 to .78 (M = .66, SD = .08). All items had loadings of less than .30 on the remaining four factors. Despite this sound factor structure, some scales of the CEQ had marginal internal consistency reliability in the present study (see Table 2). For example, the Clear Goals and Standards scale had a Cronbach coefficient alpha of 0.59 – only fair reliability for a scale of such a well-regarded instrument. Table 2 also shows means and standard deviations for these scales.

Results confirmed the psychometric structure of the CUCEI. An exploratory factor analysis with varimax rotation indicated a seven-factor structure corresponding to the seven a priori scales of the CUCEI. The principal components analysis extracted 59.45% of the variance in item scores. Factor loadings on a priori scales ranged from .34 to .88 (M = .76, SD = .15). Internal consistency reliability indices (Cronbach coefficient α) ranged from .75 for the Involvement scale to .90 for Student Cohesiveness (see Table 2). These data compare favourably with previous CUCEI validation data reported in Fraser [21] and Fraser and Treagust [25] and provide a sound basis for using CUCEI scales in subsequent analyses.
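
The internal consistency index reported in Table 2 is Cronbach's coefficient alpha, α = k/(k − 1) · (1 − Σσ²ᵢ/σ²ₜ), where k is the number of items, σ²ᵢ is the variance of item i, and σ²ₜ is the variance of respondents' total scale scores. A small illustrative sketch of the calculation (not the software used in the study):

```python
def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item (all lists the same length,
    one entry per respondent). Returns Cronbach's coefficient alpha."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    sum_of_item_variances = sum(variance(item) for item in item_scores)
    # Total scale score for each respondent, then its variance.
    totals = [sum(item[j] for item in item_scores) for j in range(n)]
    return k / (k - 1) * (1 - sum_of_item_variances / variance(totals))
```

Perfectly correlated items yield α = 1; weakly related items drive α towards (and below) zero, which is why the 0.53 and 0.59 values for two CEQ scales are a concern.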

Multilevel analyses

Variance components models for each of the five CEQ scales indicated that almost all of the variance in CEQ scores was at the student level. In fact, the proportion of total variance accounted for by course membership ranged from 0.00% for the generic skills scale to 2.67% for the good teaching scale. Despite these small proportions, multilevel analysis was the preferred approach to data analysis. Results of analyses conducted with MLwiN revealed significant associations between some dimensions of the classroom environment and students' scores on the CEQ. Table 3 shows these significant explanatory variables, their estimates in these multilevel analyses, and associated effect sizes. Clear Goals and Standards had four significant explanatory variables: personalisation, student cohesiveness, task orientation and individualisation. Effect sizes (ES) shown in Table 3 reveal that task orientation was by far the strongest predictor of clear goals and standards with a very large effect size (1.53). Large effect sizes were evident for personalisation and individualisation (0.77 and 0.78 respectively). As expected, the direction of these three effects was positive, indicating that increased levels of task orientation, personalisation and individualisation were associated with more positive perceptions of clear goals and standards.
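
The proportions quoted above are intraclass correlations computed from the variance components models: the course-level variance component divided by the total variance. A hedged sketch of the calculation (variable names and the numeric components in the comment are ours for illustration; MLwiN reports the fitted components directly):

```python
def course_level_share(var_course: float, var_student: float) -> float:
    """Proportion of total variance attributable to course membership
    in a two-level (student-within-course) variance components model."""
    return var_course / (var_course + var_student)

# Hypothetical components: a course-level variance of 0.48 against a
# student-level variance of 17.52 gives a share of about 2.67%, the
# order of magnitude reported for the good teaching scale.
```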

Table 3. Results of multilevel models for CEQ scales as response variables and CUCEI scales as explanatory variables.

Response variable | Explanatory variable | Multilevel analysis estimate | Effect size
Clear Goals & Standards | Personalisation | 0.09 | 0.77
Clear Goals & Standards | Student Cohesiveness | −0.11 | 0.01
Clear Goals & Standards | Task Orientation | 0.44 | 1.53
Clear Goals & Standards | Individualisation | 0.06 | 0.78
Generic Skills | Personalisation | 0.14 | 0.71
Generic Skills | Satisfaction | 0.24 | 1.01
Generic Skills | Task Orientation | 0.19 | 0.92
Generic Skills | Innovation | 0.12 | 0.64
Good Teaching | Personalisation | 0.46 | 1.42
Good Teaching | Student Cohesiveness | −0.11 | 0.14
Good Teaching | Satisfaction | 0.21 | 1.02
Good Teaching | Task Orientation | 0.24 | 1.20
Good Teaching | Individualisation | 0.26 | 1.17
Appropriate Workload | Student Cohesiveness | −0.11 | 0.13
Appropriate Workload | Satisfaction | 0.10 | 0.42
Appropriate Workload | Task Orientation | 0.11 | 0.50
Appropriate Workload | Innovation | −0.08 | 0.02
Appropriate Workload | Individualisation | 0.11 | 0.55
Appropriate Assessment | Task Orientation | 0.07 | 0.23

Note: Only explanatory variables statistically significant at p < .05 are listed.

The most potent predictors of Generic Skills were satisfaction and task orientation (ES = 1.01 and 0.92 respectively). Higher levels of satisfaction and task orientation were associated with improved generic skills. Personalisation was the strongest explanatory variable for the Good Teaching scale (ES = 1.42). Other variables that had a very large effect on good teaching were individualisation, task orientation and satisfaction. All four of these classroom environment variables had a significant positive effect on good teaching. Key explanatory variables of Appropriate Workload were satisfaction, task orientation and individualisation, with moderate effect sizes. Finally, task orientation was the only significant explanatory variable for Appropriate Assessment. A small effect size was evident (0.23). It is particularly noteworthy that task orientation was the only explanatory variable that featured in all multilevel models reported in Table 3. Additionally, the effects of task orientation on three of the five CEQ scales (namely, Clear Goals and Standards, Generic Skills, and Good Teaching) were very large.

Discussion

Substantive issues

No previous research linking classroom environment with performance on the CEQ in Australian universities has been reported. There are at least three substantive implications to be drawn from this study. First, more positive classroom environments are associated with better CEQ scores. This plausible finding is significant because the CEQ has been criticised for its narrow conceptualisation of course experiences. As noted earlier, the CEQ does not attempt to assess the psychosocial environment that students experience in university classrooms. This environment has been shown to be important to students' success at university (Pascarella and Terenzini [51]). Having shown that environment and CEQ scores are linked, this study provides greater confidence in CEQ scores as reflecting a broader picture of course experiences. This study suggests that university staff should pay closer attention to the quality of the psychosocial learning environment in classrooms if they wish to improve students' CEQ scores. Furthermore, university and faculty leaders should employ much more comprehensive assessment instruments than the CEQ when assessing course experiences. The CEQ should form one component of these assessments.

Second, relationship and personal growth dimensions were the key dimensions of the classroom environment that predicted CEQ scores. University staff need to recognise and respond to the importance of these dimensions. Further research on the interaction between staff and students in university environments is highly desirable. This research could be based on teacher–student interactions research work pioneered in the Netherlands 25 years ago and which has been conducted in many western countries. Much of this research has employed the Questionnaire on Teacher Interaction (QTI: Wubbels and Levy [67]) and its derivatives to provide an assessment of eight dimensions of interpersonal behaviour. These eight dimensions are conceptualised as having components on two orthogonal axes: proximity (cooperation–opposition) and influence (dominance–submission) (see Henderson and Fisher [31]; Madu and Fraser [42]; Telli, den Brok, and Cakiroglu [60]).

Third, task orientation was the key classroom environment scale in this study. It was the only scale that predicted all five CEQ scales. The importance of task orientation in predicting these outcome scales is similar to findings of much environment–outcome research conducted in schools. Dorman's [10] classroom environment study involving 1055 Grade 8, 10 and 12 students in Australia found that, of 10 classroom environment scales, task orientation was the most potent predictor of student academic efficacy. Similarly, Allen and Fraser's ([1]) study in the United States revealed task orientation to be the only classroom environment scale to predict enjoyment of science lessons and final grade. Zandvliet and Fraser [69] found strong associations between task orientation and satisfaction with a sample of 1404 high school students in Canada and Australia. Of the eight classroom environment scales employed in their study, task orientation was the strongest predictor of satisfaction. In another environment–outcome study conducted by Ogbuehi and Fraser [50] in California, task orientation was the strongest predictor of two attitudinal scales: normality of mathematicians and enjoyment of mathematics. Overall, this present study's finding that task orientation predicts CEQ scores is consistent with much previous research. Clearly, a range of outcome measures (including CEQ scores in the present study) are responsive to changes in classroom task orientation – the extent to which class activities are clear and well organised.

Methodological issues

There are four methodological issues relating to the research reported in this article. First, there are limitations to this study based on the student sample. As described earlier in this article, this study involved teacher education students in one Australian public university. As such, its results should not be generalised to all university students. Clearly, further research using a larger sample of students from a wider sample of courses in a sample of universities is needed. Cross-national research would also be desirable.

Second, one of the strengths of this study was the use of multilevel analysis to analyse data. This approach was adopted because students were nested in courses. Historically, classroom environment researchers have used either the individual student as the unit of analysis, ignoring class membership, or the class as the unit of analysis, ignoring within-class variation in scores. Both of these approaches have been shown to be problematic (see Goldstein [28]; Snijders and Bosker [59]). Dorman [13] demonstrated the effect of clustering on the results of statistical tests. Multilevel analysis, in which the hierarchical nature of the data is preserved, is the correct approach in most environment studies.
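The clustering problem described above can be made concrete with the intraclass correlation coefficient (ICC), the quantity that multilevel models partition explicitly. The sketch below is illustrative only – the scores are invented, not data from this study – and computes the one-way ANOVA estimate ICC(1) for balanced groups. A high ICC signals that course membership accounts for much of the variance, so a single-level analysis that ignores nesting would understate standard errors.

```python
def icc1(groups):
    """One-way ANOVA estimate of the intraclass correlation ICC(1).

    `groups` is a list of equal-sized lists; each inner list holds the
    scores of the students nested in one course (balanced design).
    """
    g = len(groups)              # number of courses (clusters)
    k = len(groups[0])           # students per course
    grand = sum(sum(c) for c in groups) / (g * k)
    means = [sum(c) / k for c in groups]
    # Mean squares between and within clusters
    msb = k * sum((m - grand) ** 2 for m in means) / (g - 1)
    msw = sum((x - m) ** 2
              for c, m in zip(groups, means) for x in c) / (g * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical survey scores, strongly clustered by course ...
clustered = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
# ... versus the same nine scores scattered across courses
scattered = [[1, 5, 9], [2, 4, 8], [3, 6, 7]]

print(round(icc1(clustered), 3))   # high ICC: nesting cannot be ignored
print(round(icc1(scattered), 3))   # near zero or negative: clustering negligible
```

In practice a multilevel package (e.g. a mixed-effects model with a random intercept per course) would estimate these variance components and the regression coefficients simultaneously; the point of the sketch is only to show why nesting matters.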

Third, it is clear that the CEQ falls well short of being a perfect measure of students' perceptions of their course experiences. While it has sound construct validity, its scales do not encapsulate course experiences in contemporary universities. Without scales to assess the psychosocial environment, it cannot be an adequate measure of university course experience in any era. An intuitive-rational approach to scale development, which takes into account the subjective opinions of the researcher and stakeholders, is more appropriate to the development of an authentic course experience questionnaire (see Hase and Goldberg [30]). This contrasts with the factor analytic approach, which relies entirely on the factor structure of the scores after field testing. Additionally, it could be argued that the CUCEI is not a complete measure of psychosocial environment. While both instruments have been used for over 20 years in college and university settings, this does not mean that their validity should be accepted without question.

Finally, this research employed an ex post facto design with correlational analyses. As such, no causation can be inferred. However, the body of correlational studies linking psychosocial environment in classrooms to cognitive and affective outcomes is substantial, and this study's results add to our confidence in postulating a causal path from psychosocial environment to outcomes.

Conclusion

Research conducted over the past 40 years has shown the classroom psychosocial environment to be a strong predictor of cognitive and affective outcomes (see Fraser [22]). This study has broken new ground by reporting links between classroom environment and CEQ scores in universities. To date, no Australian research has been conducted in this area. Results suggest that improvements in the classroom environment can not only be beneficial to student learning, but can also improve institutional performance on accountability measures like the CEQ. It is recommended that academic staff in universities give more attention to classroom environment, and in particular to the relationship and personal growth dimensions of the environment. The findings of this study should be considered tentative; to strengthen confidence in these associations between environment and external outcome measures, further studies involving a wider sample of students in other colleges and universities, and in other countries, should be conducted.

References

1 Allen, D., and B. J. Fraser. 2007. Parent and student perceptions of classroom learning environment and its association with student outcomes. Learning Environments Research 10, 67–82. doi: 10.1007/s10984-007-9018-z
2 Boy, A. V., and G. J. Pine. 1988. Fostering psychosocial development in the classroom. Springfield, IL: Charles C. Thomas.
3 Brew, A., and P. Ginns. 2008. The relationship between engagement in the scholarship of teaching and learning and students' course experiences. Assessment & Evaluation in Higher Education 33, 535–45. doi: 10.1080/02602930701698959
4 Cavanagh, R. F., and G. B. Dellar. 2010. The influence of the classroom psycho-social learning environment on student engagement in classroom learning. Paper presented at the annual meeting of the American Educational Research Association, May, in Denver, USA.
5 Cohen, J. 1988. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Erlbaum.
6 Curtis, D. D., and J. P. Keeves. 2000. The Course Experience Questionnaire as an institutional performance indicator. International Education Journal 1, no. 2: 73–82.
7 Davies, M., J. Hirschberg, J. Lye, and C. Johnston. 2010. A systematic analysis of quality of teaching surveys. Assessment & Evaluation in Higher Education 35, 87–100. doi: 10.1080/02602930802565362
8 Department of Education, Science and Training. 2001. Characteristics and Performance Indicators of Australian Higher Education Institutions, 2000. http://www.dest.gov.au/NR/rdonlyres/6812B85F-8C4E-415D-9240-AF3C35AFFB77/947/characteristics00.pdf (accessed April 21, 2009).
9 Diseth, A., S. Pallesen, A. Hovland, and S. Larsen. 2006. Course experience, approaches to learning and academic achievement. Education and Training 48, 156–69. doi: 10.1108/00400910610651782
10 Dorman, J. P. 2001. Associations between classroom environment and academic efficacy. Learning Environments Research 4, 243–57. doi: 10.1023/A:1014490922622
11 Dorman, J. P. 2002. Classroom environment research: Progress and possibilities. Queensland Journal of Educational Research 18, 112–40.
12 Dorman, J. P. 2008. Determinants of classroom environment in Queensland secondary schools: A multilevel reanalysis. Educational Research and Evaluation 14, 429–44. doi: 10.1080/13803610802337640
13 Dorman, J. P. 2012. The impact of student clustering on the results of statistical tests. In Second international handbook of science education, ed. B. J. Fraser, K. G. Tobin and C. J. McRobbie, vol. 2, 1333–48. Dordrecht: Springer.
14 Dorman, J. P., and B. J. Fraser. 2009. Psychosocial environment in technology-rich high school classrooms: Testing a causal model. Social Psychology of Education 12, 77–99. doi: 10.1007/s11218-008-9069-8
15 Dowdell, T., L. M. Tomson, and M. Davies. 2011. Measuring sports class learning climates: The development of the Sports Class Environment Scale. Learning Environments Research 14, 123–33. doi: 10.1007/s10984-011-9086-y
16 Eley, M. 2001. The Course Experience Questionnaire: Altering question format and phrasing could improve the CEQ's effectiveness. Higher Education Research and Development 20, 293–312. doi: 10.1080/07294360127208
17 Entwistle, N. J., and H. Tait. 1994. The Revised Approaches to Studying Inventory. Edinburgh: Centre for Research into Learning and Instruction, University of Edinburgh.
18 Entwistle, N., H. Tait, and V. McCune. 2000. Patterns of response to an approaches to studying inventory across contrasting groups and contexts. European Journal of Psychology of Education 15, 33–48. doi: 10.1007/BF03173165
19 Fisher, D. L., P. den Brok, B. G. Waldrip, and J. P. Dorman. 2011. Interpersonal behaviour styles of primary education teachers during science lessons. Learning Environments Research 14, 187–204. doi: 10.1007/s10984-011-9093-z
20 Fisher, D. L., and M. S. Khine, eds. 2006. Contemporary approaches to research on learning environments. Singapore: World Scientific.
21 Fraser, B. J. 1998. Classroom environment instruments: Development, validity, and applications. Learning Environments Research 1, 7–33. doi: 10.1023/A:1009932514731
22 Fraser, B. J. 2012. Classroom learning environments: Retrospect, context and prospect. In Second international handbook of science education, ed. B. J. Fraser, K. G. Tobin and C. J. McRobbie, vol. 2, 1191–1239. Dordrecht: Springer.
23 Fraser, B., G. J. Anderson, and H. J. Walberg. 1982. Assessment of learning environments: Manual for Learning Environment Inventory (LEI) and My Class Inventory (MCI). 3rd ed. Perth: Western Australian Institute of Technology.
24 Fraser, B. J., G. J. Giddings, and C. J. McRobbie. 1992. Assessment of the psychosocial environment of university science laboratory classrooms: A cross-national study. Higher Education 24, 431–51. doi: 10.1007/BF00137241
25 Fraser, B. J., and D. F. Treagust. 1986. Validity and use of an instrument for assessing classroom psychological environment in higher education. Higher Education 15, 37–57. doi: 10.1007/BF00138091
26 Fredricks, J. A., P. C. Blumenfeld, and A. H. Paris. 2004. School engagement: Potential of the concept, state of the evidence. Review of Educational Research 74, 59–109. doi: 10.3102/00346543074001059
27 Frey, P. W., D. W. Leonard, and W. W. Beatty. 1975. Student ratings of instruction: Validation research. American Educational Research Journal 12, 327–36. doi: 10.3102/00028312012004435
28 Goldstein, H. 2003. Multilevel statistical models. London: Edward Arnold.
29 Graduate Careers Council Australia. 2009. The CEQ and the PREQ. http://www.graduatecareers.com.au/content/view/full/870 (accessed April 21, 2009).
30 Hase, H. D., and L. G. Goldberg. 1967. Comparative validity of different strategies of constructing personality inventory scales. Psychological Bulletin 67, 231–48. doi: 10.1037/h0024421
31 Henderson, D. G., and D. L. Fisher. 2008. Interpersonal behaviour and student outcomes in vocational education classes. Learning Environments Research 11, 19–29. doi: 10.1007/s10984-007-9034-z
32 Hildebrand, M., R. C. Wilson, and E. R. Dienst. 1971. Evaluating university teaching. Berkeley, CA: Center for Research and Development in Higher Education, University of California, Berkeley.
33 Institute for Teaching and Learning, University of Sydney. 2009. How valid is the CEQ? http://www.itl.usyd.edu.au/ceq/faq.htm#q6? (accessed March 27, 2009).
34 Jelfs, A., and J. T. E. Richardson. 2010. Perceptions of academic quality and approaches to studying among disabled and non-disabled students in distance education. Studies in Higher Education 35, 593–607. doi: 10.1080/03075070903222666
35 Kember, D., and D. Y. P. Leung. 2009. Development and validation of a questionnaire for assessing students' perceptions of the teaching and learning environment and its use in quality assurance. Learning Environments Research 12, 15–29. doi: 10.1007/s10984-008-9050-7
36 Kember, D., D. Y. P. Leung, and K. P. Kwan. 2002. Does the use of student feedback questionnaires improve the overall quality of teaching? Assessment and Evaluation in Higher Education 27, 411–25. doi: 10.1080/0260293022000009294
37 Khine, M. S., and D. L. Fisher, eds. 2003. Technology-rich learning environments: A future perspective. Singapore: World Scientific.
38 Khine, M. S., and B. J. Fraser. 2010. Preservice teachers' perceptions of technology-integrated learning environments: Assessing patterns of teaching and learning. Paper presented at the annual meeting of the American Educational Research Association, May, in Denver, USA.
39 Lawless, C., and J. T. E. Richardson. 2004. Monitoring the experiences of graduates in distance education. Studies in Higher Education 29, 353–74. doi: 10.1080/03075070410001682628
40 Lizzio, A., K. Wilson, and R. Simons. 2002. University students' perceptions of the learning environment and academic outcomes: Implications for theory and practice. Studies in Higher Education 27, 27–52. doi: 10.1080/03075070120099359
41 MacLeod, C., and B. J. Fraser. 2010. Development, validation and application of a modified Arabic translation of the What Is Happening In this Class? (WIHIC) questionnaire. Learning Environments Research 13, 105–25. doi: 10.1007/s10984-008-9052-5
42 Madu, N. E., and B. J. Fraser. 2009. Associations between teacher's interpersonal behaviour, learning environment and students' outcomes. Paper presented at the annual meeting of the American Educational Research Association, April, in San Diego, USA.
43 Marsh, H. W. 1982. SEEQ: A reliable, valid and useful instrument for collecting students' evaluations of university teaching. British Journal of Educational Psychology 52, 77–95. doi: 10.1111/j.2044-8279.1982.tb02505.x
44 Marsh, H. W. 1987. Students' evaluations of university teaching: Research findings, methodological issues and directions for future research. International Journal of Educational Research 11, 253–388. doi: 10.1016/0883-0355(87)90001-2
45 McInnis, C. 1997. Defining and assessing the student experience of the quality management process. Tertiary Education and Management 3, 63–71. doi: 10.1080/13583883.1997.9966908
46 McInnis, C., P. Griffin, R. James, and H. Coates. 2001. Development of the Course Experience Questionnaire. Canberra: Department of Education and Youth Affairs.
47 McInnis, C., and R. James. 1995. First year on campus. Canberra: Australian Government Publishing Service.
48 Moos, R. H., and E. J. Trickett. 1987. Classroom environment scale manual. 2nd ed. Palo Alto, CA: Consulting Psychologists Press.
49 Murray, H. A. 1938. Explorations in personality. New York: Oxford University Press.
50 Ogbuehi, P. I., and B. J. Fraser. 2007. Learning environment, attitudes and conceptual development associated with innovative strategies in middle-school mathematics. Learning Environments Research 10, 101–14. doi: 10.1007/s10984-007-9026-z
51 Pascarella, E., and P. Terenzini. 1991. How college affects students. San Francisco: Jossey-Bass.
52 Price, L., J. T. E. Richardson, and A. Jelfs. 2007. Face-to-face versus online tutoring support in distance education. Studies in Higher Education 32, 1–20. doi: 10.1080/03075070601004366
53 Ramsden, P. 1991. A performance indicator of teaching quality in higher education: The Course Experience Questionnaire. Studies in Higher Education 16, 129–50. doi: 10.1080/03075079112331382944
54 Richardson, J. T. E. 2005. Instruments for obtaining student feedback: A review of the literature. Assessment & Evaluation in Higher Education 30, 387–415. doi: 10.1080/02602930500099193
55 Richardson, J. T. E. 2009a. The attainment and experiences of disabled students in distance education. Distance Education 30, 87–102. doi: 10.1080/01587910902845931
56 Richardson, J. T. E. 2009b. Face-to-face versus online tutoring support in humanities courses in distance education. Arts and Humanities in Higher Education 8, 69–85. doi: 10.1177/1474022208098303
57 Richardson, J. T. E. 2010. Perceived academic quality and approaches to studying in higher education: Evidence from Danish students of occupational therapy. Scandinavian Journal of Educational Research 54, 189–203. doi: 10.1080/00313831003637972
58 Seet, L. Y. B., and C. L. Quek. 2010. Evaluating students' perceptions and attitudes toward computer-mediated project-based learning environment: A case study. Learning Environments Research 13, 173–85. doi: 10.1007/s10984-010-9073-8
59 Snijders, T. A. B., and R. J. Bosker. 1999. Multilevel analysis: An introduction to basic and advanced multilevel modeling. London: Sage.
60 Telli, S., P. den Brok, and J. Cakiroglu. 2007. Students' perceptions of science teachers' interpersonal behaviour in secondary schools: Development of a Turkish version of the Questionnaire on Teacher Interaction. Learning Environments Research 10, 115–29. doi: 10.1007/s10984-007-9023-2
61 Thomas, G. P., and D. Anderson. 2010. Changing the metacognitive orientation of a classroom environment to enhance students' metacognition regarding chemistry learning. Paper presented at the annual meeting of the American Educational Research Association, May, in Denver, USA.
62 Waldrip, B. G., D. L. Fisher, and J. P. Dorman. 2009. Identifying exemplary science teachers through students' perceptions of their learning environment. Learning Environments Research 12, 1–13. doi: 10.1007/s10984-008-9049-0
63 Warrington, W. G. 1973. Student evaluation of instruction at Michigan State University. In Proceedings: The first invitational conference on faculty effectiveness as evaluated by students, ed. A. L. Sockloof, 164–82. Philadelphia: Measurement and Research Center, Temple University.
64 Waxman, H. C., K. Sparks, J. Stillisano, and Y. H. Lee. 2009. The development and use of a learning environment instrument to evaluate teachers' professional development. Paper presented at the annual meeting of the American Educational Research Association, April, in San Diego, USA.
65 Wei, M., P. den Brok, and Y. Zhou. 2009. Teacher interpersonal behaviour and student achievement in English as a foreign language classrooms in China. Learning Environments Research 12, 157–74. doi: 10.1007/s10984-009-9059-6
66 Wilson, K. L., A. Lizzio, and P. Ramsden. 1997. The development, validation and application of the Course Experience Questionnaire. Studies in Higher Education 22, 33–53. doi: 10.1080/03075079712331381121
67 Wubbels, T., and J. Levy. 1993. Do you know what you look like? Interpersonal relationships in education. London: Falmer Press.
68 Yorke, M. 2009. 'Student experience' surveys: Some methodological considerations and an empirical investigation. Assessment & Evaluation in Higher Education 34, 721–39. doi: 10.1080/02602930802474219
69 Zandvliet, D. B., and B. J. Fraser. 2005. Physical and psychosocial environments associated with networked classrooms. Learning Environments Research 8, 1–17. doi: 10.1007/s10984-005-7951-2


ISSN: 0307-5079 (print)
DOI: 10.1080/03075079.2012.674936
