
MODIFYING INSTRUCTION WITHIN TIERS IN MULTITIERED INTERVENTION PROGRAMS

KUPZYK, Sara; DALY, Edward J.; et al.
In: Addressing Response to Intervention Implementation: Questions from the Field, Vol. 49 (2012), Issue 3, pp. 219-230

ABSTRACT

Response to Intervention provides a continuum of instruction across intensity levels through multitiered intervention models. Much work to date has been devoted to how to configure tiers to ensure the appropriate increases in intensity. Much less work has been devoted to making adjustments within tiers to forestall the need for moving students to a more intense level of instruction when the student is not making adequate progress. This article provides a simple model for evaluating current instruction to look for areas in which it can be adjusted before more restrictive measures are taken. The model draws from the literature on functional assessment of academic performance. Teachers and consultants are advised to check (a) the skills targeted for instruction, (b) guided practice, (c) independent practice, (d) implementation fidelity, and (e) the motivating conditions that are present during instruction. The role of each area in student learning and progress is discussed, and recommendations are made for adjustments. © 2012 Wiley Periodicals, Inc.

Multitiered intervention models such as Response to Intervention (RtI) are designed to enhance students' learning rates and skill development across the full continuum of student ability levels, from those with the weakest skills to those with the strongest skills. Common features that characterize RtI models include: (a) screening of all students to identify those who need more or different types of instruction; (b) use of data and objective decision rules to inform instructional placement; (c) provision of high‐quality, evidence‐based instruction matched to student needs; and (d) ongoing progress monitoring using reliable and valid indicators of skill proficiency to determine the effectiveness of the instruction for individual students (National Association of State Directors of Special Education, [38]). Unfortunately, as is the case with most educational innovations, achieving conceptual clarity about what needs to be done is much simpler than grappling with the realities of actual implementation in schools. For example, many schools are not prepared to use assessment data to make instructional decisions without external supports guiding their efforts (Gersten & Dimino, [20]). Furthermore, teachers' knowledge and skill in using data and implementing interventions varies considerably from teacher to teacher (Piasta, McDonald Connor, Fishman, & Morrison, [42]), meaning that some teachers do not know what changes should be made to match instruction to a student's actual skill level.

In a multitiered intervention system, screening and progress‐monitoring data are used to make decisions about student placement across tiers of instructional intensity. For example, Tier 1 in reading may be regular classroom instruction in a core curriculum, Tier 2 may be a standard intervention protocol that increases the length and intensity of reading instruction, and Tier 3 may be more individualized instruction (e.g., special education). The RtI literature is replete with recommendations for ways to configure tiers and select curriculum and intervention packages (e.g., Marston, [33]; Shinn, [48]). The overarching goal of RtI is to create a fluid and flexible continuum of services to maximize all students' progress. Each tier must provide the highest quality instruction for the resources that are devoted to it, with intensity of instruction increasing as a student is moved to higher tiers. If a student fails to progress at a tier, a more intense (and therefore more restrictive and costly) form of instruction must be delivered at a higher tier. In other words, the nonresponsive student is placed in a higher tier.

Identifying actual prevalence rates of students who do not respond to instruction ("nonresponders") is very difficult because results vary by the criteria and measures used, quality, type, and intensity of instruction across studies, curricular materials, length of instructional sessions and phases, and other such variables (Fuchs, Fuchs, & Compton, [19]; McMaster, Fuchs, Fuchs, & Compton, [35]). However, among students who are at risk for reading failure, high percentages may be persistent nonresponders. For example, Vaughn, Linan‐Thompson, and Hickman (2003) found that almost a quarter of students in their study failed to respond to intense supplemental instruction, whereas McMaster et al. ([35]) found that 70% of nonresponders in their study failed to progress adequately in one of their supplemental interventions. As such, reducing the number of students who fail to succeed at a tier should also be a goal of any RtI model. Successful modifications within a tier (i.e., resulting in adequate student progress) may save time and resources and keep the student in the least restrictive placement. Strangely, this topic has rarely been discussed in the RtI literature. Therefore, the purpose of this article is to discuss simple methods for strengthening instruction within tiers, regardless of the tier at which instruction is being carried out. To accomplish this purpose, intervention strategies drawn from the literature regarding functional assessment of academic skills that can be applied in multitiered intervention models are outlined.

Due to space restrictions, several assumptions had to be made to be able to devote sufficient detail to the strategies discussed in this article. First, one assumption is that the assessment mechanisms of RtI (i.e., screening and progress monitoring) are in place and that teachers therefore have data indicating a lack of student progress within a tier, but that the problem is not chronic or severe enough yet to move to a higher tier. Second, the assumption is made that the focus of intervention is basic academic skills. Although RtI is being expanded to other areas such as behavior and secondary education (e.g., Myers, Simonsen, & Sugai, [37]; Vaughn et al., [53]), the emphasis of this article will be on remediating basic skills. This is both the most common area for intervention (and often the most necessary) and the area in which available strategies are most abundant. Finally, although the suggested strategies target how teachers can change instruction to maximize its benefits, we fully recognize and anticipate that many different types of individuals, including school psychologists, may be consulting with the teacher about how to change instruction for a particular student as a part of an organizational RtI implementation team. Nonetheless, many of the complexities of the consultation process will not be discussed.

ALTERABLE COMPONENTS OF INSTRUCTION

The critical importance of progress‐monitoring data using valid and sensitive indicators of academic skills in any RtI system reflects an underlying focus on alterable variables that can be expected to change student behavior, factors over which teachers have direct control (Howell & Nolet, [28]). One might say that the assessment results diagnose not the child, but the effectiveness of instruction itself (Englemann, Granzin, & Severson, [16]). It is essential, therefore, to target the most important and robust alterable components of instruction. In a classic article on the topic, Lentz and Shapiro ([32]) called for a functional assessment of the academic environment using already well‐developed technologies for observation and analysis. They outlined a number of strategies for improving academic engagement that can be applied to either the "immediately impinging ecology" of instruction (p. 349; i.e., what is actually being done during instruction) or pre‐ and post‐work events. Factors immediately affecting the student during instruction include the sequence of skill instruction, sources of positive reinforcement during work time (e.g., teacher attention), frequency of student responding, the pace of instruction, error correction, and frequent feedback. Pre‐ and post‐work variables include instructions before work begins and consequences following the completion of work (e.g., feedback to others such as parents and access to preferred activities).

Lentz, Allen, and Ehrhardt (1996) extended this work by describing elements of "strong interventions" for academic performance problems. The components of strong interventions according to Lentz et al. include opportunities to respond, positive reinforcement following work completion, immediate feedback about accuracy following performance, use of progress monitoring for decision making, and instruction characterized by appropriate pace, error correction, contingencies suitable for the student's proficiency level (i.e., accuracy versus fluency), and appropriate use of modeling, prompting, and fading.

Since the Lentz and Shapiro ([32]) article appeared, a number of publications have used a functional assessment framework for organizing and describing intervention strategies for improving instruction. For example, Howell and Nolet ([28]) described perhaps the most complex model for generating intervention strategies. Referred to as curriculum‐based evaluation, the model is grounded in an analysis of instruction, curriculum, educational environment, and the learner, and a wide variety of strategies is recommended across all of the major curricular domains. Shapiro's (2004) book on academic skills problems has become a staple for school psychologists who consult with teachers in this area. The reader is referred to these textbooks as useful resources for more detailed explanations of the types of strategies that appear in this literature.

A critical aspect of identifying appropriate instructional changes is examining the student's proficiency and motivational levels. For example, Daly, Lentz, and Boyer (1996) showed the utility of the Instructional Hierarchy as a heuristic tool in selecting reading interventions according to a student's level of proficiency with the task. Students should progress toward accuracy, fluency, and then generalization with effective instruction. The astute teacher varies instructional methods according to whether the student is learning to respond accurately, fluently, or to a diversity of examples (i.e., generalized responding). Daly, Witt, Martens, and Dool (1997) took the model a step further and described five possible reasons for a lack of progress, giving explicit strategies aligned with each reason. Their model drew from instructional and motivational strategies that have been used effectively in the academic intervention literature. More recently, Daly, Martens, Barnett, Witt, and Olson (2007) reviewed strategies that can be readily added to existing instruction to strengthen it without significantly increasing demands on the teacher. Specifically, they examined how to evaluate instructional materials for appropriateness, how to use sequentially matched materials, how to design "productive practice time," and how to change reinforcement contingencies over time to promote acquisition, fluency, maintenance, and generalization.

The literature on functional assessment of academic performance has offered a number of helpful strategies and outlined when they may be most appropriate, even describing types of assessment that can be conducted. However, the functional assessment literature's common framework and intervention strategies for changing instruction have largely focused on the development of individualized interventions and have not been directly applied to RtI. Yet, creating a fluid and flexible continuum of services through system‐level implementation necessarily imposes organizational constraints on how individual students' needs are met within the system. Therefore, a practical model for modifying instruction within tiers, drawing from a solid research base (i.e., the functional assessment literature), would have great utility for educators. Based on intervention strategies drawn from the functional assessment literature and our own work with schools in helping them to develop and implement RtI models as a part of the Nebraska RtI Implementation and Support Team, we have devised a process for examining different aspects of current instruction as a basis for recommending modifications within tiers. The rest of the article outlines that process and actual modifications that can be undertaken.

MODIFYING INSTRUCTION

The strategies are presented within a two‐step process of first investigating the current content, instruction, or motivational strategy and then adjusting it by making a change that creates a better alignment between the instruction and the student's current skill and motivational level. The teacher and/or consultant can begin by investigating the five areas outlined in Table 1. The areas to check are prioritized in a sequential order. First, one should examine the appropriateness of the skills targeted for instruction. Next, the instruction itself should be examined, first for sequence and then for implementation. Teachers should be following a process of introducing new skills with guided practice and then having students practice newly learned skills independently. Subsequently, one should check whether instruction is being followed and delivered according to a plan. Finally, if none of those areas turns up possible modifications, or in addition to instructional modifications, we recommend examining the child's motivation for engaging in and completing instructional tasks whenever academic engagement and work completion appear to be low. Each area will be discussed in turn, with recommendations about how modifications can be made in these areas.

Table 1. Checking Current Instruction for Areas Where Modifications Can Be Made

Check the Skills Targeted for Instruction:
•   Does the child have the necessary prerequisite skills?
•   Are instructional tasks sequenced to assure a logical and empirically supported progression of skill sequences?
•   Is the student appropriately placed in the skill sequence?

Check for Guided Practice:
•   Are instructions clear, and do they explain (a) how to do the task and (b) what will happen when the assignment is completed?
•   Is the model‐lead‐test method used to assure accurate responding?
•   Is the pace appropriate to ensure that students answer correctly and frequently?
•   Are students' errors corrected with immediate feedback, modeling of the correct response, and correct practice?

Check for Independent Practice:
•   Is the difficulty level of assignments appropriate?
•   Is the level of complexity of the assignment appropriate?
•   Are there acceptable criteria for performance with feedback to the student?
•   Is the teacher checking the student's accuracy and providing feedback as he or she practices?
•   Is sufficient time devoted to fluency building (i.e., productive practice time)?

Check Implementation Fidelity:
•   Is there a clearly organized plan for instruction, and is it being done correctly?
•   Is the dosage of instruction sufficient?
 ○   Are enough sessions scheduled and occurring?
 ○   Is the instructional session sufficiently long?
•   Are the essential components of the intervention being delivered as planned?

Check the Motivating Conditions:
•   Is there positive feedback for academic engagement during instruction?
•   Are positive consequences for completing work available following instruction?
•   Have the positive consequences been validated (i.e., shown to improve performance)?
•   Is the schedule for positive consequences appropriate?
•   Are competing contingencies (e.g., access to distracting materials) eliminated or controlled?
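
For teams that document the problem‐solving process electronically, Table 1 translates naturally into a simple walk‐through data structure. The sketch below is purely illustrative (the yes/no recording convention, the dictionary layout, and the function name are ours, not part of any published instrument): it records an answer for each question and flags any area containing at least one "no" as a candidate for modification.

```python
# A minimal sketch of Table 1 as a checklist: record yes/no answers per
# question and surface the areas that contain at least one "no".
# The wording is condensed from Table 1; the recording convention is ours.

CHECKLIST = {
    "Skills targeted for instruction": [
        "Prerequisite skills present?",
        "Logical, empirically supported skill sequence?",
        "Student placed appropriately in the sequence?",
    ],
    "Guided practice": [
        "Clear instructions (how to do the task, what happens after)?",
        "Model-lead-test used?",
        "Appropriate pace with frequent, correct responding?",
        "Errors corrected with feedback, modeling, and correct practice?",
    ],
    "Independent practice": [
        "Appropriate difficulty level?",
        "Appropriate complexity?",
        "Performance criteria with feedback to the student?",
        "Teacher checks accuracy during practice?",
        "Sufficient fluency-building time?",
    ],
    "Implementation fidelity": [
        "Clear plan, implemented correctly?",
        "Sufficient dosage (enough sessions, long enough)?",
        "Essential components delivered as planned?",
    ],
    "Motivating conditions": [
        "Positive feedback for engagement?",
        "Positive consequences for work completion?",
        "Consequences validated (shown to improve performance)?",
        "Appropriate schedule of consequences?",
        "Competing contingencies controlled?",
    ],
}

def areas_to_modify(answers: dict) -> list:
    """Return the Table 1 areas with at least one 'no' (False) answer."""
    return [area for area, marks in answers.items() if not all(marks)]

# Example: start from all-'yes' answers, then mark two questions 'no'.
answers = {area: [True] * len(qs) for area, qs in CHECKLIST.items()}
answers["Guided practice"][1] = False          # model-lead-test not used
answers["Implementation fidelity"][1] = False  # dosage insufficient
print(areas_to_modify(answers))  # ['Guided practice', 'Implementation fidelity']
```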

Check the Skills Targeted for Instruction

Academic skill instruction is most effective when it is systematic and explicit (National Institute of Child Health and Human Development, [39]; Carnine, Silbert, Kame'enui, & Tarver, 2006). Systematic instruction often refers to what is being taught, and explicit instruction refers to how it is being taught. Which skills are targeted for instruction should be guided by a specific sequence, introducing skills in the order most appropriate for learning (Howell & Evans, [27]). The skill sequence should be based on an analysis that examines all component skills that require mastery before mastery of the broader skill can be attained (Howell & Nolet, [28]). For example, before a student can read the word bike, he or she must (a) know the letter sounds (i.e., /b/, /ī/, /k/), (b) know the rule that the vowel in a consonant‐vowel‐consonant‐final‐e (CVCe) word says its name, and (c) know the strategy of blending individual sounds together to say a word (i.e., /bike/). Additionally, the student will likely experience more success with gaining the skill of word reading if he or she has acquired phonological awareness, which involves learning how to break words into individual sounds (i.e., phonemes) and how letters correspond to sounds (Carnine, Silbert, Kame'enui, & Tarver, [3]). Identifying and following a logical progression of skills is critical because when a broader skill requires prerequisite or component skills a student has not mastered, instruction in the broader skill will be fruitless. Additionally, if that broader skill becomes a prerequisite to learning a later skill or task (e.g., adding affixes to CVCe words to create multisyllabic words, such as "biked" or "biking"), the student may fall even farther behind.

Therefore, when a student is not making expected progress based on examination of data, the first question a teacher should ask is whether the presentation of skills follows a logical and empirically supported progression and includes teaching of component skills before introducing the broader skill. To examine the skill sequence, teachers should begin by looking to an empirically based, preexisting skill sequence. Several resources that provide sample skill sequences and rules for sequencing skill instruction in the area of reading include Direct Instruction Reading (5th ed.; Carnine et al., [3]) and Teaching Struggling and At‐Risk Readers: A Direct Instruction Approach (Carnine et al., [4]). In Designing Effective Mathematics Instruction: A Direct Instruction Approach (4th ed.), Stein, Kinder, Silbert, and Carnine (2005) provide sample skill sequences and rules for sequencing math skills, and in Curriculum‐Based Evaluation: Teaching and Decision Making, Howell and Nolet ([28]) provide sample analyses of critical tasks for reading and math skills.

After ensuring that the sequence of skill presentation follows a logical progression, teachers need to ensure that the student is placed appropriately for instruction within the skill sequence. This requires teachers to be able to identify specific skills that the student has not mastered within the sequence and to provide instruction for those skills. Informal diagnostic tools can be useful to assess various basic skills. If the teacher is using an instructional program that includes in‐program assessments, these may be used to determine student mastery of skills that have been taught. Teachers may administer the mastery assessments in sequential order until reaching content the student has not mastered and begin instruction at this point.
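
In procedural terms, the placement rule just described reduces to walking the ordered skill sequence and beginning instruction at the first skill the student has not mastered. A minimal sketch follows, assuming a hypothetical reading sequence and a 90% mastery criterion (neither the skill names nor the criterion comes from the article):

```python
# Sketch of placement within an ordered skill sequence: administer
# mastery assessments in order and begin instruction at the first
# skill scored below criterion. Skills and criterion are illustrative.
from typing import Optional

SKILL_SEQUENCE = ["letter sounds", "CVC blending", "CVCe words", "affixes"]

def placement_point(scores: dict, criterion: float = 0.90) -> Optional[str]:
    """Return the first skill in the sequence scored below criterion."""
    for skill in SKILL_SEQUENCE:
        if scores.get(skill, 0.0) < criterion:
            return skill
    return None  # all assessed skills mastered; jump ahead in the sequence

scores = {"letter sounds": 0.98, "CVC blending": 0.95, "CVCe words": 0.70}
print(placement_point(scores))  # -> "CVCe words": begin instruction here
```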

If in‐program assessments are not available, teachers may use informal diagnostic tools in the area of reading, such as the Consortium on Reading Excellence (CORE) Phonics Survey (Honig, Diamond, & Gutlohn, [26]), the Phonological Awareness Screener for Intervention (PASI; 95 Percent Group, 2010a), and the Phonics Screener for Intervention (PSI; 95 Percent Group, [2]). Once the teacher has identified specific skills with which the student is struggling, those skills should be targeted for instruction. Occasionally, a student may not be making sufficient progress because the focus of instruction is on skills that have already been mastered and the teacher needs to "jump ahead" in the skill sequence to provide instruction for skills the student has not yet mastered. This occurs most frequently when a teacher begins intervention with the first lesson of a program and delivers each lesson in sequence. Use of in‐program assessments or diagnostic tools can assist with this concern as well, by providing information to more accurately place a student within the identified skill sequence.

Check for Guided Practice

More than any others, students who enter school with limited background knowledge and those who struggle to develop skills at the same rate as peers (which may be one and the same student) need systematic and explicit instruction (Foorman & Torgesen, [17]). Explicit instruction is interactive teaching that evokes high rates of accurate responding in clearly specified skills. Therefore, interactive teaching requires that the teacher design instructional sessions to maximize student responding and check to be sure it is being achieved. The teacher can accomplish this objective only if he or she adjusts instruction to the student's current proficiency level over time. This is generally done by sequencing instructional sessions to help students progress from no or little responding (a skill deficit) to accurate responding, then fluency, and then generalization (Haring, Lovitt, Eaton, & Hansen, [24]).

Given that the student is not progressing in the current scenario, the teacher must first build accuracy and then fluency (Daly et al., [7]). Guided practice, which includes explicit instructions for correctly completing the instructional task, modeling, supervised practice with immediate feedback for each response, error correction, and amount of supervised practice adjusted according to student accuracy, is the first activity teachers should carry out. When accuracy is strengthened such that errors are low and the student responds reliably with correct answers, the teacher should then initiate independent practice to build fluency. Students should not be practicing independently (i.e., with minimal supervision) until the teacher can be sure that they will not be practicing errors. Therefore, when problem solving has revealed that the correct skills have been targeted, one should examine whether instruction is being sequenced accordingly for the skill being taught. Guided practice and independent practice will each be discussed in turn, as this is the natural order of progression of instruction.

The teacher should introduce new skills using guided practice. Student errors during the introduction of new skills may be the most important diagnostic indicator of the effectiveness of instruction. If the student is making errors, the teacher or consultant should check each component of guided practice. Directions and instructions should be checked for clarity and thoroughness. Student errors may reveal a lack of good modeling and error correction. Modeling of skills and error correction should follow a model‐lead‐test approach (Carnine et al., [3]). This approach provides opportunities for students to practice the target skill correctly and, thereby, increases the likelihood students will respond accurately to the next presentation of an instructional item. For example, a teacher introducing the letter sound for "s" might say, "This is the letter 's'; it makes the /sss/ sound" [teacher model]; "What sound?" /sss/ [lead; students and teacher respond in unison]; "Good, the sound is /sss/" [affirmation of correct response]; "What sound?" [test; students respond independently]. If students do not respond correctly during the test stage, the teacher should repeat the process, thus providing more opportunities for correct practice.

Next, the teacher should provide supervised practice with immediate feedback for every response, which raises the question, "How much practice should be given?" In the research literature, practice has been conceptualized as opportunities to respond (OTR), which is one of the most critical contributors to skill development (Greenwood, [22]). Increasing OTR for students who struggle with reading is an essential ingredient in making instruction more explicit, as these students typically receive fewer opportunities to practice reading both within and outside of school and thus fall increasingly further behind peers (Cunningham & Stanovich, [6]). Many of the OTR should occur during independent practice. However, the teacher should not have the student engage in independent practice until he or she is responding with few errors and little to no assistance. Thus, teachers must monitor accuracy and provide immediate positive and corrective feedback to promote rapid skill acquisition (Skinner, Fletcher, & Henington, [49]). Affirming student answers provides students with positive feedback and an additional model of the target skill (e.g., "Yes, /sss/"). Correcting errors immediately increases correct practice and decreases student guessing. Checking for all the elements of guided practice is the most important step in assuring that the teacher is systematically building skill acquisition.

Check for Independent Practice

Following guided practice, students should engage in independent practice to increase skill fluency. The amount of time spent on fluency‐building activities within curricula and intervention programs is not usually sufficient to produce mastery (Chard, Vaughn, & Tyler, [5]). In fact, Edmonds and Briggs ([15]) found that only 3% of the time during first‐grade classroom reading instruction was spent on fluency activities. Additional opportunities for independent practice can be built into lessons to promote fluency, generalization, and maintenance. Independent practice assignments should focus on skills students demonstrate with low fluency, but high accuracy (i.e., above 93%), so that the students are likely to engage in correct practice (Treptow, Burns, & McComas, [51]). Furthermore, the complexity of the assignment should be carefully matched to students' skill level, as core academic skills (e.g., reading and math) are composed of multiple component skills that increase in complexity as the student progresses in the curriculum. For example, students must first learn how to add before they are given a story‐problem assignment focused on the application of addition skills. In reading, students must know the individual sounds in a word before they are given an assignment to read a word by blending the sounds together. If the difficulty level of the assignment is too high (i.e., less than 93% accuracy), students are less likely to be engaged and more likely to guess answers and make errors (Treptow et al., [51]).
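
Because the accuracy criterion just cited is a simple ratio, it can be checked before any assignment is handed out. The sketch below applies the article's 93% threshold; the function name and the example counts are our own, illustrative additions.

```python
# Sketch of the 93% accuracy rule for assigning independent practice
# (per Treptow, Burns, & McComas, as cited above): have a student
# practice a skill independently only when accuracy is high enough
# that he or she will mostly rehearse correct responses.

def ready_for_independent_practice(correct: int, attempts: int,
                                   criterion: float = 0.93) -> bool:
    """True when accuracy meets or exceeds the criterion (default 93%)."""
    if attempts == 0:
        return False  # no data yet; keep the skill in guided practice
    return correct / attempts >= criterion

print(ready_for_independent_practice(28, 30))  # 93.3% -> True
print(ready_for_independent_practice(25, 30))  # 83.3% -> False
```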

Similar to guided practice, the teacher should provide clear instructions, check students' accuracy, and provide performance feedback (Chard et al., [5]). Whereas feedback is provided for every response during skill acquisition, feedback can be provided less frequently during independent practice (Haring et al., [24]). Feedback may be provided by graphing performance in relation to a predetermined goal or teaching students to self‐monitor performance, which may improve motivation as well (Eckert, Dunn, & Ardoin, [14]). Although independent practice is less teacher directed, teachers should monitor the accuracy of the responses and use the data to determine the appropriateness of the task and amount of practice needed.

Measuring the cumulative amount of independent practice time to mastery of component skills is useful for informing the amount of practice an individual student may need to master similar skills. Daly et al. ([8]) proposed measuring productive practice time, or the smallest measurable unit of practice. For example, the number of minutes of oral reading, writing, and math computation practice can be measured. In addition to giving individual independent practice assignments, productive practice time during intervention sessions can be increased through paired practice with peers. Within paired practice, students can also be taught to provide one another with performance feedback. For example, Dufrene et al. ([12]) taught peer tutors to deliver a repeated‐reading package that included performance feedback and reward for effort. The tutors were able to implement the package with moderate to high levels of fidelity, and students demonstrated improvements in oral reading fluency following implementation of the tutoring procedure. Similarly, Menesses and Gresham ([36]) found that students who received peer tutoring using a time‐delay procedure demonstrated improvements in basic mathematics skills compared with a control group. Neglecting fluency building will likely have negative effects on students' ability to generalize newly learned skills to more complex skills (Daly et al., [7]; Haring et al., [24]). Thus, this component of instruction should be viewed as indispensable and therefore carefully checked when student progress is limited.
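
Because productive practice time is defined here as the accumulated minutes of actual responding, a running per‐skill log is enough to track it toward mastery. A minimal sketch (the session values and skill label are hypothetical):

```python
# Sketch of tracking cumulative productive practice time (minutes of
# actual responding, e.g., oral reading) per skill. Values are hypothetical.
from collections import defaultdict

practice_minutes = defaultdict(float)

def log_practice(skill: str, minutes: float) -> float:
    """Add one session's productive minutes; return the running total."""
    practice_minutes[skill] += minutes
    return practice_minutes[skill]

for session in [6.5, 7.0, 5.5, 8.0]:  # four sessions of oral reading
    total = log_practice("oral reading", session)
print(f"Cumulative productive practice: {total} min")  # 27.0 min
```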

Check Implementation Fidelity

High implementation fidelity is necessary for making valid and informed decisions about student progress (Hagermoser‐Sanetti & Kratochwill, 2009; National Association of State Directors of Special Education, [38]). Specifically, one cannot attribute student progress or lack of progress to the intervention if it was not provided consistently and frequently enough or was not implemented as planned. Once again, within an RtI framework, although it is the student who is being assessed with repeated measures over time, it is the instruction that is being diagnosed and not the student per se. Therefore, before concluding that a student needs a change of placement based on a lack of progress, evidence for implementation fidelity should be sought. Given that the amount of instructional time is a strong predictor of learning (Gettinger & Seibert, 2002), students must receive an appropriate dosage of instruction (i.e., frequency and length of sessions) to benefit. However, even when a sufficient amount of the intervention is provided, adherence (often measured as the percentage of essential components of the intervention completed correctly) must be examined (Vadasy, Jenkins, Antil, Wayne, & O'Connor, [52]). Unfortunately, interventions in school settings are often implemented with low levels of fidelity in the absence of ongoing performance feedback (Noell, 2008). Therefore, fidelity should be monitored regularly, and support should be given to help teachers maintain high implementation fidelity.

Before one can assess fidelity, one must have a clear understanding of the instructional plan. Prior to implementation, a written intervention plan should be developed and include a step‐by‐step guide, the skills targeted, the frequency and length of sessions, and who is responsible for implementation. The written plan can then be used to determine whether the instruction was delivered as intended. As adherence and dosage influence student progress, both should be measured throughout implementation using practical methods. A checklist of essential program components is the most common and objective method of measuring adherence (Schulte, Easton, & Parker, [46]). If a checklist is not included in the selected prepackaged instructional program, schools can create a simple checklist by making a sequential list of steps to be followed during the lessons, including other essential components for implementation (e.g., materials prepared in advance, use of signals for responding, how to respond to a student question, etc.). Given the limited resources in schools, it is most practical to have teachers self‐report adherence daily (Plavnick, Ferreri, & Maupin, [43]) and have an outside observer who is knowledgeable in the intervention program (e.g., school psychologist, peer) observe fidelity periodically, with more frequent observations when fidelity is of concern (DiGennaro, Martens, & McIntyre, [11]). The dosage of intervention provided can be easily tracked by having the instructor complete a log that includes the date, student attendance, and start and end times of the intervention session. Data from the checklists and log can be compared with the level of implementation prescribed in the written intervention plan to determine whether modifications or further teacher supports are needed (Dusenbury, Brannigan, Falco, & Hansen, [13]).
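
Both measures described in this paragraph reduce to simple arithmetic over a checklist and a session log. The sketch below is an illustration only; the log fields, dates, and planned dosage are assumptions, not values from the article.

```python
# Sketch of the two fidelity checks described above: adherence as the
# percentage of essential components completed correctly, and dosage
# (sessions and minutes delivered) compared against the written plan.
from datetime import date

def adherence(components_correct: list) -> float:
    """Percentage of essential components implemented correctly."""
    return 100.0 * sum(components_correct) / len(components_correct)

# One row per planned session: (date, student attended, minutes delivered).
session_log = [
    (date(2012, 3, 5), True, 30),
    (date(2012, 3, 6), True, 28),
    (date(2012, 3, 8), False, 0),  # student absent
]
planned_sessions, planned_minutes = 3, 30

delivered = [minutes for _, attended, minutes in session_log if attended]
print(f"Adherence: {adherence([True, True, False, True, True]):.0f}%")  # 80%
print(f"Sessions received: {len(delivered)} of {planned_sessions}")
print(f"Mean session length: {sum(delivered) / len(delivered):.0f} of "
      f"{planned_minutes} planned min")
```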

When data show a lack of student progress and low implementation fidelity, several steps can be taken to improve adherence and increase the amount of intervention provided. If adherence is low, general discussions of implementation may improve some teachers' adherence, whereas others require more regular, structured, and explicit feedback based on fidelity and student data (Noell, 2008). Structured performance feedback typically involves providing the teacher with a graph of the percentage of components implemented, a review of the components missed or completed incorrectly, and problem solving to improve future implementation. When problem solving for future implementation, consideration should be given to why fidelity is poor so that appropriate supports can be provided. For example, the reason a teacher consistently misses a component essential to the intervention may range from an increase in inappropriate student behavior during the activity to failure to prepare additional materials (e.g., copies, other objects that must be prepared in advance and require additional teacher time) to lack of skill or knowledge regarding how to implement one or more steps. The supports provided to teachers should be individualized to match their needs.

Ongoing coaching, in addition to professional development, is particularly useful for improving instructional practices (Neuman & Wright, [40]). Rather than just focusing on specific areas of the instructional plan (i.e., steps not accomplished), coaching might also focus on reading development, effective instructional practices, specific program implementation, and/or behavior management (Walpole, McKenna, Uribe‐Zarain, & Lamitina, 2010). Enhancing teachers' knowledge of reading development and instruction should be an underlying goal of professional development because teachers with a strong knowledge base in reading development and the English language have been shown to produce better student outcomes. Specifically, student outcomes are improved because the teachers are better able to provide accurate examples, appropriately correct student errors, and modify instruction to meet student needs (Piasta et al., [42]). The coaching process may include setting goals with teachers, observing instruction, providing feedback, modeling lessons or teaching strategies, and use of data to inform instruction (Neuman & Wright, [40]). Providing ongoing coaching and performance feedback to support implementation of the intervention is likely to improve fidelity and thereby student progress.

If the frequency of instructional sessions appears to be appropriate, but the student is often absent, it is recommended that schools collaborate with families to identify barriers to attendance and develop a plan to improve attendance. If, however, the intervention is being provided at the planned dosage, but the student is not making progress, the number or length of sessions may need to be increased. Given the time constraints and amount of content teachers are expected to cover in several academic subjects, it can be difficult to arrange for students to receive more instruction, but quickly closing the gap for struggling students is necessary for their academic success (e.g., Francis, Shaywitz, Stuebing, Shaywitz, & Fletcher, [18]). In general, it is better to provide students with more instruction in a brief time period than to provide fewer sessions across a longer period (Wanzek & Vaughn, [56]).

The length of the intervention sessions should be consistent with recommended levels for research‐based programs, but sessions may be lengthened if indicated. Findings differ across studies, however, regarding the benefits of increasing session length. Wanzek and Vaughn ([56]) found that first‐grade students receiving a single or double dose of an intervention (i.e., 30 versus 60 min) outperformed students in a control condition, but that there was no significant difference between the intervention groups. Harn, Linan‐Thompson, and Roberts (2008), however, found that students receiving 60 minutes per day had higher rates of progress than those receiving 30 minutes per day. Two differences between the studies seem noteworthy and may have implications for practice. First, providing more of the same intervention (e.g., a double dose, as in the Wanzek & Vaughn, [56], study) may not be beneficial if the intervention does not meet student needs or provide sufficient productive practice time (the Harn et al., [25], study increased time spent on fluency building from 17% to 33% of the intervention session). Second, if student disengagement and inappropriate behavior increase during longer instructional periods (which may have been the case in the Wanzek & Vaughn, [56], study), students are less likely to learn the material presented. Therefore, providing increased instructional time may be beneficial, but the instruction must be matched to the students' needs and build skill fluency, and student engagement must remain high.

Check the Motivating Conditions

When the content and delivery of instruction have been carefully examined and changed as necessary, the teacher is advised to arrange motivating consequences to increase the student's engagement during instruction and for task completion (Lentz, [29]). Positive consequences for task engagement and work completion will strengthen academic skills and reduce the aversiveness of difficult tasks, making it a very powerful tool in the teacher's repertoire. The most obvious and simplest form of positive consequence for behavior is social attention. Instructional time should be examined to determine whether the teacher is providing high rates of positive social attention for student behavior (Martens & Kelly, [34]).

Positive social attention contingent on appropriate behavior is essential to effective instruction, but may not be enough. Lentz (1988b) recommends that one test for a skill versus a performance deficit using other potentially effective rewards (methods are described by Daly et al., [10]). Very good methods for identifying potential rewards have been developed and are easily used in practice (Resetar & Noell, [44]; Schanding, Tingstrom, & Sterling‐Turner, 2009). For example, Daly, Wells, Swanger‐Gagne, Carr, Kunz, and Taylor (2009) used simple activities identified as reinforcers based on a stimulus‐preference assessment method (i.e., multiple‐stimulus without replacement) to improve math computation rates of students with behavioral disorders during independent seatwork. The stimulus‐preference assessment is conducted in three sessions (to account for possible fluctuations in motivational levels), with sessions taking between 5 and 10 minutes. Combining contingent positive social attention with contingent access to preferred (and simple‐to‐deliver) activities following work completion can be expected to strongly influence student behavior during instruction, increasing students' responsiveness to high‐quality instruction.
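
The multiple‐stimulus‐without‐replacement procedure mentioned above has straightforward scoring logic: within each session the student repeatedly chooses from the items that remain, items are ranked by selection order, and ranks are averaged across the three sessions, with lower mean ranks indicating stronger preference. A minimal sketch under those assumptions (the activity names are hypothetical):

```python
# Sketch of scoring a multiple-stimulus-without-replacement (MSWO)
# preference assessment: items are ranked by the order in which the
# student selects them (1 = chosen first), and ranks are averaged
# across the three sessions. Activity names are hypothetical.
from statistics import mean

# Each session lists the activities in the order the student chose them.
sessions = [
    ["computer time", "drawing", "puzzle", "blocks"],
    ["computer time", "puzzle", "drawing", "blocks"],
    ["drawing", "computer time", "puzzle", "blocks"],
]

mean_rank = {
    activity: mean(session.index(activity) + 1 for session in sessions)
    for activity in sessions[0]
}

# Most preferred first (lowest mean selection rank).
for activity, rank in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{activity}: mean rank {rank:.2f}")
```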

Placing this recommendation last may appear to contradict a previous recommendation by the second author (Daly et al., [10]) to examine motivation first by doing a performance deficit analysis. Although we fully recognize that altering consequences is relatively easy to do when compared with changing instruction for an individual case (the context of the Daly et al., [10], article) and that it may be the most efficient thing to do for case‐based services, the systemic nature of RtI has caused us to put this recommendation last in the current context. Because the primary concern in any RtI model is to strengthen the overall integrity of the model so that it serves all students well, we recommend examining curriculum and instructional practices first (the first four recommendations). Quite simply, more students will benefit from these alterations at a systemic level. Simply changing the contingencies for a particular student may mask deeper problems with the instruction that is being delivered to all students in a classroom or in a particular tier. Furthermore, although we have placed this last in the process, this step is not a mere afterthought; rather, it should be routinely examined. Therefore, whether one chooses to do it early or late hinges on whether the priority is to change factors that are more likely to have a systemic effect (i.e., curriculum and instruction) or to change factors for a particular child who may be having difficulty.

CONCLUSION

Providing quality, evidence‐based instruction matched to student needs is a key feature of RtI. However, when students do not make progress within a tier of instruction, teachers may be unsure how to modify instruction to better meet student needs. This article outlined a two‐step process of first examining content, instruction, and motivation and then making simple modifications to enhance students' response to instruction within a tier. Along with an examination of student data (e.g., progress monitoring, mastery assessments, diagnostic assessment, engagement, attendance, etc.), teachers and consultants can use the questions in Table 1 as a guide for diagnosing the effectiveness of the instruction and investigating ways to modify it to increase students' rate of growth with basic academic skills.

REFERENCES

1. 95 Percent Group. (2010a). Phonological Awareness Screener for Intervention (PASI). Lincolnshire, IL: 95 Percent Group.
2. 95 Percent Group. (2010b). Phonics Screener for Intervention (PSI). Lincolnshire, IL: 95 Percent Group.
3. Carnine, D. W., Silbert, J., Kame'enui, E., & Tarver, S. G. (2010). Direct instruction reading (5th ed.). New York: Merrill.
4. Carnine, D. W., Silbert, J., Kame'enui, E. J., Tarver, S. G., & Jungjohann, K. (2006). Teaching struggling and at‐risk readers: A direct instruction approach. Upper Saddle River, NJ: Pearson Prentice Hall.
5. Chard, D. J., Vaughn, S., & Tyler, B. (2002). A synthesis of research on effective interventions for building oral reading fluency with elementary students with learning disabilities. Journal of Learning Disabilities, 35, 386–406. doi:10.1177/00222194020350050101
6. Cunningham, A., & Stanovich, K. (1998). What reading does for the mind. American Educator, 22, 8–15.
7. Daly, E. J., III, Lentz, F. E., & Boyer, J. (1996). The instructional hierarchy: A conceptual model for understanding the effective components of reading interventions. School Psychology Quarterly, 11, 369–386.
8. Daly, E. J., III, Martens, B. K., Barnett, D., Witt, J. C., & Olson, S. C. (2007). Varying intervention delivery in response‐to‐intervention: Confronting and resolving challenges with measurement, instruction, and intensity. School Psychology Review, 36, 562–581.
9. Daly, E. J., III, Wells, N. J., Swanger‐Gagne, M. S., Carr, J. E., Kunz, G. M., & Taylor, A. M. (2009). Evaluation of the multiple‐stimulus without replacement preference assessment method using activities as stimuli. Journal of Applied Behavior Analysis, 42, 563–574.
10. Daly, E. J., III, Witt, J. C., Martens, B. K., & Dool, E. J. (1997). A model for conducting a functional analysis of academic performance problems. School Psychology Review, 26, 554–574.
11. DiGennaro, F. D., Martens, B. K., & McIntyre, L. L. (2005). Increasing treatment integrity through negative reinforcement: Effects on teacher and student behavior. School Psychology Review, 34(2), 220–231.
12. Dufrene, B. A., Reisener, C. D., Olmi, D. J., Zoder Martell, K., McNutt, M. R., & Horn, D. R. (2010). Peer tutoring for reading fluency as a feasible and effective alternative in response to intervention systems. Journal of Behavioral Education, 19, 239–256. doi:10.1007/s10864‐010‐9111
13. Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18, 237–256. doi:10.1093/her/18.2.237
14. Eckert, T. L., Dunn, E. K., & Ardoin, S. P. (2006). The effects of alternate forms of performance feedback on elementary‐aged students' oral reading fluency. Journal of Behavioral Education, 15, 149–162. doi:10.1007/s10864‐006‐9018‐6
15. Edmonds, M., & Briggs, K. L. (2003). The instructional content emphasis instrument: Observations of reading instruction. In S. Vaughn & K. Briggs (Eds.), Reading in the classroom: Systems for the observation of teaching and learning. Baltimore: Paul H. Brookes Publishing.
16. Englemann, S., Granzin, A., & Severson, H. (1979). Diagnosing instruction. The Journal of Special Education, 13, 355–363.
17. Foorman, B. R., & Torgesen, J. (2001). Critical elements of classroom and small‐group instruction promote reading success in all children. Learning Disabilities Research & Practice, 16, 203–212. doi:10.1111/0938‐8982.00020
18. Francis, D. J., Shaywitz, S. E., Stuebing, K. K., Shaywitz, B. A., & Fletcher, J. M. (1996). Developmental lag versus deficit models of reading disability: A longitudinal, individual growth curves analysis. Journal of Educational Psychology, 88, 3–17.
19. Fuchs, D., Fuchs, L. S., & Compton, D. L. (2004). Identifying reading disabilities by responsiveness‐to‐instruction: Specifying measures and criteria. Learning Disability Quarterly, 27, 216–227.
20. Gersten, R., & Dimino, J. (2006). RTI (response to intervention): Rethinking special education for students with reading difficulties (yet again). Reading Research Quarterly, 41, 99–108. doi:10.1598/RRQ.41.1.5
21. Gettinger, M., & Seibert, J. K. (2002). Best practices in increasing academic learning time. In A. Thomas (Ed.), Best practices in school psychology IV: Vol. 1 (4th ed., pp. 773–787). Bethesda, MD: National Association of School Psychologists.
22. Greenwood, C. R. (1996). The case for performance‐based instructional models. School Psychology Quarterly, 11, 283–296.
23. Hagermoser‐Sanetti, L. M., & Kratochwill, T. R. (2009). Treatment integrity assessment in the schools: An evaluation of the Treatment Integrity Planning Protocol (TIPP). School Psychology Quarterly, 24, 24–35.
24. Haring, N., Lovitt, T., Eaton, M., & Hansen, C. (1978). The fourth R: Research in the classroom. Columbus, OH: Merrill.
25. Harn, B. A., Linan‐Thompson, S., & Roberts, G. (2008). Intensifying instruction: Does additional instructional time make a difference for the most at‐risk first graders? Journal of Learning Disabilities, 41, 115–125. doi:10.1177/0022219407313586
26. Honig, B., Diamond, L., & Gutlohn, L. (2008). Teaching reading sourcebook (2nd ed.). Novato, CA: Arena Press.
27. Howell, K. W., & Evans, D. G. (1995). A comment on "Must instructionally useful performance assessment be based in the curriculum?" Exceptional Children, 61, 394–396.
28. Howell, K. W., & Nolet, V. (2000). Curriculum‐based evaluation: Teaching and decision making (3rd ed.). Belmont, CA: Wadsworth.
29. Lentz, F. E. (1988a). On‐task behavior, academic performance, and classroom disruptions: Untangling the target selection problem in classroom interventions. School Psychology Review, 17, 243–257.
30. Lentz, F. E. (1988b). Effective reading interventions in the regular classroom. In J. L. Graden, J. Zins, & M. J. Curtis (Eds.), Alternative educational delivery systems: Enhancing instructional options for all students (pp. 351–370). Washington, DC: The National Association of School Psychologists.
31. Lentz, F. E., Allen, S. J., & Ehrhardt, K. E. (1996). The conceptual elements of strong interventions in school settings. School Psychology Quarterly, 11, 118–136.
32. Lentz, F. E., & Shapiro, E. S. (1986). Functional assessment of the academic environment. School Psychology Review, 15, 346–357.
33. Marston, D. (2005). Tiers of intervention in responsiveness to intervention: Prevention outcomes and learning disabilities identification patterns. Journal of Learning Disabilities, 38, 539–544.
34. Martens, B. K., & Kelly, S. Q. (1993). A behavioral analysis of effective teaching. School Psychology Quarterly, 8, 10–26.
35. McMaster, K. L., Fuchs, D., Fuchs, L. S., & Compton, D. L. (2005). Responding to nonresponders: An experimental field trial of identification and intervention methods. Exceptional Children, 71, 445–463.
36. Menesses, K. F., & Gresham, F. M. (2009). Relative efficacy of reciprocal and nonreciprocal peer tutoring for students at‐risk for academic failure. School Psychology Quarterly, 24, 266–275. doi:10.1037/a0018174
37. Myers, D. M., Simonsen, B., & Sugai, G. (2011). Increasing teachers' use of praise with a response‐to‐intervention approach. Education and Treatment of Children, 34, 35–59.
38. National Association of State Directors of Special Education. (2008). Response to intervention blueprints for implementation. Retrieved December 15, 2008, from http://www.nasdse.org/Portals/0/SCHOOL.pdf
39. National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel. Teaching children to read: An evidence‐based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups (NIH Publication No. 00‐4754). Washington, DC: US Government Printing Office.
40. Neuman, S. B., & Wright, T. S. (2010). Promoting language and literacy development for early childhood educators: A mixed‐methods study of coursework and coaching. The Elementary School Journal, 111, 63–86.
41. Noell, G. H. (2008). Research examining the relationships among consultation process, treatment integrity, and outcomes. In W. P. Erchul & S. M. Sheridan (Eds.), Handbook of research in school consultation (pp. 323–341). New York: Lawrence Erlbaum Associates.
42. Piasta, S. B., McDonald Connor, C., Fishman, B. J., & Morrison, F. J. (2009). Teachers' knowledge of literacy concepts, classroom practices, and student reading growth. Scientific Studies of Reading, 13, 224–248.
43. Plavnick, J. B., Ferreri, S. J., & Maupin, A. N. (2010). The effects of self‐monitoring on the procedural integrity of a behavioral intervention for young children with developmental disabilities. Journal of Applied Behavior Analysis, 43, 315–320.
44. Resetar, J. L., & Noell, G. H. (2008). Evaluating preference assessments for use in the general education population. Journal of Applied Behavior Analysis, 41, 447–451. doi:10.1901/jaba.2008.41‐447
45. Schanding, G. T., Tingstrom, D. H., & Sterling‐Turner, H. E. (2009). Evaluation of stimulus preference assessment methods with general education students. Psychology in the Schools, 46, 89–99. doi:10.1002/pits.20356
46. Schulte, A. C., Easton, J. E., & Parker, J. (2009). Advances in treatment integrity research: Multidisciplinary perspectives on the conceptualization, measurement, and enhancement of treatment integrity. School Psychology Review, 38, 460–475.
47. Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd ed.). New York: Guilford.
48. Shinn, M. R. (2007). Identifying students at risk, monitoring performance, and determining eligibility within response to intervention: Research on educational need and benefit from academic intervention. School Psychology Review, 36, 601–617.
49. Skinner, C. H., Fletcher, P. A., & Henington, C. (1996). Increasing learning rates by increasing student response rates: A summary of research. School Psychology Quarterly, 11, 313–325. doi:10.1037/h0088937
50. Stein, M., Kinder, D., Silbert, J., & Carnine, D. W. (2005). Designing effective mathematics instruction: A direct instruction approach (4th ed.). Upper Saddle River, NJ: Prentice Hall.
51. Treptow, M. A., Burns, M. K., & McComas, J. J. (2007). Reading at the frustration, instructional, and independent levels: Effects on student time on task and comprehension. School Psychology Review, 36, 159–166.
52. Vadasy, P. F., Jenkins, J. R., Antil, L. R., Wayne, S. K., & O'Connor, R. E. (1997). The effectiveness of one‐to‐one tutoring by community tutors for at‐risk beginning readers. Learning Disability Quarterly, 20, 126–139.
53. Vaughn, S., Cirino, P. T., Wanzek, J., Fletcher, J. M., Denton, C. D., Barth, A., et al. (2010). Response to intervention for middle school students with reading difficulties: Effects of a primary and secondary intervention. School Psychology Review, 39, 3–21.
54. Vaughn, S., Linan‐Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391–409.
55. Walpole, S., McKenna, M. C., Uribe‐Zarain, X., & Lamitina, D. (2010). The relationship between coaching and instruction in the primary grades: Evidence from high poverty schools. The Elementary School Journal, 111, 115–140.
56. Wanzek, J., & Vaughn, S. (2008). Response to varying amounts of time in reading intervention for students with low response to intervention. Journal of Learning Disabilities, 41, 126–142. doi:10.1177/0022219407313426

By Sara Kupzyk; Edward J. Daly; Tanya Ihlo and Nicholas D. Young


Title:
MODIFYING INSTRUCTION WITHIN TIERS IN MULTITIERED INTERVENTION PROGRAMS
Authors / Contributors: KUPZYK, Sara; DALY, Edward J.; IHLO, Tanya; YOUNG, Nicholas D.; JONES, Ruth E.; BALL, Carrie R.
Journal: Addressing Response to Intervention Implementation: Questions from the Field, Vol. 49 (2012), Issue 3, pp. 219-230
Published: Hoboken, NJ: Wiley, 2012
Media type: academicJournal
ISSN: 0033-3085 (print)
Keywords:
  • North America
  • America
  • United States
  • Human
  • Public health
  • School failure
  • Child
  • Implementation
  • School environment
  • Performance
  • Educational program
  • Sanitary program
  • Prevention
  • Mental health
  • Strategy
  • Learning disability
  • Biological and medical sciences
  • Fundamental and applied biological sciences. Psychology
  • Psychology. Psychophysiology
  • Educational psychology
  • Pupil and student. Academic achievement and failure
  • Medical sciences
  • Psychopathology. Psychiatry
  • Social psychiatry. Ethnopsychiatry
  • Prevention. Health policy. Planification
  • Psychology. Psychoanalysis. Psychiatry
  • Cognition
  • Psychology, psychopathology, psychiatry
  • Subject Geographic: North America; America; United States
Other information:
  • Indexed in: FRANCIS Archive
  • Language: English
  • Original material: INIST-CNRS
  • Document type: Article
  • File description: text
  • Author affiliations: University of Nebraska-Lincoln, United States; Department of Special Education, Ball State University, Teachers College, Room 720, Muncie, IN 47306, United States; Indiana State University, United States
  • Rights: Copyright 2015 INIST-CNRS; CC BY 4.0; unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
