1 Introduction

During the 2019–2020 academic year, the Dynamic Learning Maps® (DLM®) Alternate Assessment System offered assessments of student achievement in English Language Arts (ELA), mathematics, and science for students with the most significant cognitive disabilities in grades 3–8 and high school. Because science followed a different development timeline, a separate technical manual update was prepared for that subject (see Dynamic Learning Maps Consortium, 2020b).

The purpose of the DLM system is to improve academic experiences and outcomes for students with the most significant cognitive disabilities by setting high, actionable academic expectations and by providing appropriate and effective supports to educators. Results from the DLM alternate assessment are intended to support interpretations about what students know and are able to do and to support inferences about student achievement in the given subject. Results provide information that can guide instructional decisions and that can be used in state accountability programs.

The DLM Alternate Assessment System is based on the core belief that all students should have access to challenging, grade-level content. Online DLM assessments give students with the most significant cognitive disabilities opportunities to demonstrate what they know in ways that traditional paper-and-pencil, multiple-choice assessments cannot. The DLM alternate assessment provides optional, instructionally embedded testlets that are available for use in day-to-day instruction. A year-end assessment is administered in the spring, and results from that assessment are reported for state accountability purposes and programs. This design is referred to as the year-end model and is one of two models for the DLM Alternate Assessment System. See the Assessment section in this chapter for an overview of both models.

A complete technical manual was created after the first operational administration in 2014–2015. After each annual administration, a technical manual update is provided to summarize updated information. The current technical manual provides updates for the 2019–2020 administration, including changes to the assessment model and blueprints to prioritize measuring fewer Essential Elements with more items, and associated impacts on test development and scoring. Only sections with updated information are included in this manual. For a complete description of the DLM assessment system, refer to previous technical manuals, including the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016b).

1.1 Impact of COVID-19 on the Administration of DLM Assessments

The COVID-19 pandemic significantly impacted administration of the spring 2020 DLM assessment. Beginning in March 2020, many states and local school districts closed in an effort to slow the spread of the virus, as recommended by the Centers for Disease Control and Prevention (CDC, 2020a, 2020b). During school closures, students across the country were unable to complete their spring assessments, including the DLM alternate assessments. As a result, on March 20, 2020, the United States Secretary of Education used her authority under the Elementary and Secondary Education Act of 1965 (ESEA, 1965), as amended by the Every Student Succeeds Act (ESSA, 2015), to invite states to submit one-year waivers of the assessment and accountability requirements. All 50 states, the District of Columbia, the Commonwealth of Puerto Rico, and the Bureau of Indian Education applied for and received these waivers (Recommended Waiver Authority Under Section 3511(d)(4) of Division A of the Coronavirus Aid, Relief, and Economic Security Act [“CARES Act”], 2020).

Due to the cancellation of spring assessment administration, very few students participating in the DLM assessment completed their assessments as intended. Because the available data were not an accurate and complete reflection of students’ knowledge, skills, and understandings, summative results were not provided in 2019–2020. Instead, limited results were optionally provided to state education agencies to inform instructional decisions in the subsequent school year. The consortium governance board met with Accessible Teaching, Learning, and Assessment Systems (ATLAS) staff to discuss the level of score reporting that was appropriate and technically defensible; this information was also shared with the Technical Advisory Committee, which supported the approach. For a summary of the results provided in 2019–2020, see Chapter 7 of this manual.

This manual presents evidence for the limited results that were provided in 2019–2020, as well as other administration, test development, and research activities that occurred in 2019–2020 and were unaffected by the COVID-19 pandemic.

1.2 Background

In 2019–2020, DLM assessments were available to students in 20 states and one Bureau of Indian Education school: Alaska, Arkansas, Colorado, Delaware, District of Columbia, Illinois, Iowa, Kansas, Maryland, Missouri, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Rhode Island, Utah, West Virginia, Wisconsin, and Miccosukee Indian School.

One DLM Consortium partner, District of Columbia, only administers assessments in science.

In 2019–2020, ATLAS at the University of Kansas continued to partner with the Center for Literacy and Disability Studies at the University of North Carolina at Chapel Hill and with the Center for Research Methods and Data Analysis, also at the University of Kansas. The project was also supported by a Technical Advisory Committee.

1.3 Assessment

Assessment blueprints consist of the Essential Elements (EEs) prioritized for assessment by the DLM Consortium. To achieve blueprint coverage, each student is administered a series of testlets. Each testlet is delivered through an online platform, Kite® Student Portal. Student results are based on evidence of mastery of the linkage levels for every assessed EE.
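The operational scoring model is described in Chapter 5 and in prior technical manuals; the general idea of aggregating linkage level mastery evidence can nonetheless be illustrated with a minimal sketch. The example below assumes mastery is decided by comparing a model-based mastery probability to a fixed threshold; the threshold, the probabilities, and the data structures are hypothetical and do not represent the operational system.

```python
# Minimal illustrative sketch: derive per-EE mastery summaries from
# hypothetical linkage level mastery probabilities. The operational DLM
# model and decision rules are documented in Chapter 5 and prior manuals.

MASTERY_THRESHOLD = 0.8  # hypothetical probability cutoff, not the DLM rule

# Hypothetical mastery probabilities, keyed by EE and ordered from the
# lowest linkage level to the highest.
mastery_probabilities = {
    "ELA.EE.RL.3.1": [0.97, 0.91, 0.84, 0.42, 0.10],
    "ELA.EE.RL.3.2": [0.95, 0.88, 0.35, 0.20, 0.05],
}

def highest_mastered_level(probabilities, threshold=MASTERY_THRESHOLD):
    """Return the 1-based index of the highest linkage level whose
    mastery probability meets the threshold, or 0 if none does."""
    highest = 0
    for level, probability in enumerate(probabilities, start=1):
        if probability >= threshold:
            highest = level
    return highest

for ee, probabilities in mastery_probabilities.items():
    print(ee, "-> highest mastered linkage level:",
          highest_mastered_level(probabilities))
```

In the operational system, linkage level mastery evidence across EEs is combined with established cut points (see Chapter 6) to assign performance levels.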

There are two assessment models for the DLM alternate assessment. Each state chooses its own model.

  • Instructionally embedded model. There are two instructionally embedded testing windows: fall and spring. Educators have some choice of which EEs to assess, within constraints. For each EE, the system recommends a linkage level for assessment, and the educator may accept the recommendation or choose another linkage level. At the end of the year, summative results are based on mastery estimates for linkage levels for each EE (including performance on all testlets from both instructionally embedded assessment windows) and are used for accountability purposes. The pools of operational assessments for the fall and spring instructionally embedded windows are separate. In 2019–2020, the states adopting the instructionally embedded model included Arkansas, Iowa, Kansas, Missouri, and North Dakota.

  • Year-end model. During a single operational testing window in the spring, all students take testlets that cover the whole blueprint. Each testlet assesses one linkage level of a single EE, and the linkage level for each testlet varies according to student performance on the previous testlet (one possible routing rule is sketched after this list). Summative assessment results reflect the student’s performance and are used for accountability purposes each school year. Instructionally embedded assessments are available during the school year but are optional and do not count toward summative results. In 2019–2020, the states adopting the year-end model included Alaska, Colorado, Delaware, Illinois, Maryland, New Hampshire, New Jersey, New Mexico, New York, Oklahoma, Rhode Island, Utah, West Virginia, Wisconsin, and Miccosukee Indian School.
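To make the adaptive routing concrete, the following minimal sketch shows one way the linkage level of the next testlet could be chosen from performance on the previous one. The five linkage levels named below are those used for ELA and mathematics EEs; the percent-correct cutoffs and the function itself are hypothetical and do not represent the operational routing rules.

```python
# Minimal illustrative sketch: adjust the linkage level of the next
# testlet based on performance on the previous testlet. The cutoffs are
# hypothetical; the operational routing rules are defined by the DLM system.

LINKAGE_LEVELS = [
    "Initial Precursor",
    "Distal Precursor",
    "Proximal Precursor",
    "Target",
    "Successor",
]

def next_linkage_level(current_index: int, percent_correct: float) -> int:
    """Return the index of the linkage level for the next testlet,
    moving at most one level up or down from the current level."""
    if percent_correct >= 0.80 and current_index < len(LINKAGE_LEVELS) - 1:
        return current_index + 1  # strong performance: route up one level
    if percent_correct <= 0.35 and current_index > 0:
        return current_index - 1  # weak performance: route down one level
    return current_index          # otherwise, stay at the same level

# Example: a student at the Target level answers 85% of items correctly
# on a testlet, so the next testlet is drawn from the Successor level.
index = next_linkage_level(LINKAGE_LEVELS.index("Target"), 0.85)
print(LINKAGE_LEVELS[index])  # Successor
```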

Information in this manual is common to both models wherever possible and is specific to the year-end model where appropriate. A separate version of the technical manual exists for the instructionally embedded model.

1.4 Technical Manual Overview

This manual provides evidence collected during the 2019–2020 administration of year-end assessments.

Chapter 1 provides a brief overview of the assessment and administration for the 2019–2020 academic year and a summary of the contents of the remaining chapters. While subsequent chapters describe the individual components of the assessment system separately, key topics such as validity are addressed throughout this manual.

Chapter 2 was not updated for 2019–2020; no changes were made to the learning map models used for operational administration of DLM assessments. See the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016b) for a description of the DLM map-development process.

Chapter 3 outlines evidence related to test content collected during the 2019–2020 administration, including a description of blueprint adjustments, test development activities, external review of content, and the operational and field test content available.

Chapter 4 provides an update on test administration during the 2019–2020 year. The chapter summarizes the Instruction and Assessment Planner used to assign instructionally embedded assessments and describes new data extracts for monitoring assessment administration.

Chapter 5 provides a brief summary of the psychometric model used in scoring DLM assessments. This chapter includes a summary of 2019–2020 calibrated parameters. For a complete description of the modeling method, see the 2015–2016 Technical Manual Update—Year-End Model (Dynamic Learning Maps Consortium, 2017b).

Chapter 6 was not updated for 2019–2020; no changes were made to the cut points used in scoring DLM assessments. See the 2014–2015 Technical Manual—Year-End Model (Dynamic Learning Maps Consortium, 2016b) for a description of the methods, preparations, procedures, and results of the standard-setting meeting and the follow-up evaluation of the impact data.

Chapter 7 provides descriptions of changes to score reports and data files during the 2019–2020 administration to reflect the impact of COVID-19 on the assessment administration.

Chapter 8 was not updated for 2019–2020 due to a limited and non-representative sample of completed assessments as a result of COVID-19. For a complete description of the reliability background and methods, see the 2015–2016 Technical Manual Update—Year-End Model (Dynamic Learning Maps Consortium, 2017b).

Chapter 9 describes additional validity evidence collected during the 2019–2020 administration that is not covered in previous chapters. The chapter provides evidence collected for three of the five critical sources of evidence: test content, internal structure, and response processes.

Chapter 10 describes updates to required training and the professional development offered across the DLM Consortium in 2019–2020, including participation rates and evaluation results.

Chapter 11 summarizes the contents of the previous chapters. It also provides future directions to support operations and research for DLM assessments.