Supports and Challenges of Collecting and Using Initial, Formative, and Summative Assessment Data in Programs Serving Adults with Low Literacy Skills
May 31st, 2024 | Blogs
From June 2023 through April 2024, World Education led a community-engaged research initiative on behalf of the Adult Literacy and Learning Impact Network (ALL IN). This opportunity funded short-term research projects to unearth instructional and programmatic innovation, make visible teacher knowledge, and offer insights aligned to the National Action Plan for Adult Literacy’s goals of access, quality, and/or uptake of adult learning opportunities. This blog is one of five reports from the funded research teams of this initiative.
By Alisa Belzer
Adult education service providers face many challenges in meeting the instructional needs of learners reading at or below a 4th grade level equivalent (GLE), due partly to limited capacity to conduct actionable assessment. Commonly used pre- and post-tests (e.g., TABE and CASAS) lack detail on learners' reading component skills (e.g., decoding, fluency, and comprehension), which most of these learners have yet to master (MacArthur et al., 2010). However, resources to implement more informative assessments are rarely available. As a result, tutors and teachers may lack sufficiently detailed knowledge of appropriate instructional starting points, as well as tools for accurately assessing progress over time through formative assessment and post-tests, making it difficult to provide responsive and relevant instruction.
Little is known about actual assessment practices in programs and classrooms. Learning more about both the strengths programs demonstrate and the challenges they face could help shape improved assessment tools, professional development, instruction, and learner outcomes. We therefore implemented this study to fill this knowledge gap and to learn what supports are needed to collect and use assessment data in ways that inform instruction deeply and meaningfully. We focused on the perspectives of practitioners (administrators, teachers, and tutors) working in programs that serve learners who test at or below the 4th GLE. This exploratory study can also pave the way for intervention research that tests the effectiveness of new assessment tools designed to provide detailed, actionable, and easy-to-use data; professional development designed to support their use; and the impact of reading instruction systematically informed by assessment data.
This community-engaged study posed research questions about the assessment practices programs currently use with this population, the challenges practitioners encounter in collecting and using assessment data, and the supports they need to use those data to inform instruction.
This study had two data collection components: 1) a focus group study with practitioners from around the country and 2) a nationally distributed survey of practitioners. In addition to participating in focus groups, these practitioners informed and gave feedback on the development of the survey and then assisted in interpreting the data and articulating findings and implications.
There was a robust response to the survey, with a total of 559 respondents answering at least some of the questions. Respondents were about evenly split between instructors (paid and volunteer) and administrators, and by a wide margin they were full-time employees. Although there were some important distinctions between administrators and instructors, between larger and smaller programs, and between programs that do and do not receive Workforce Innovation and Opportunity Act (WIOA) funds, for the most part responses were remarkably similar across program type, job role, and training.
The descriptions of learners that respondents endorsed most strongly were that they are enthusiastic and excited to learn (mean 3.63 out of 5) but attend inconsistently (mean 3.70). The lowest-rated descriptions were that learners are confident in their ability to learn how to read (mean 2.74) and that they have adequate time to put in the effort needed to make noticeable progress (mean 2.91). When it came to describing learners' reading challenges, the mean was 3.67 or above for every option (comprehension, vocabulary, integrating background knowledge, using phonics to sound out unfamiliar words, and fluency) except learning disability, where the mean was slightly lower. This underscores the importance of focusing on the component skills of reading.
The most common challenges that teachers and tutors face in meeting the needs of these learners are, in order: inconsistent attendance (57%), not knowing whether learners have a learning disability (46%), not having initial, helpful, actionable information on what exactly a learner's needs are with regard to developing reading skills (e.g., knowing where to begin instruction) (46%), and not having the training to assess a learning disability related to reading (39%). Considerably fewer respondents reported the challenges of not knowing how to teach reading effectively (23%), not having appropriate instructional materials (i.e., materials that are not adult enough) (23%), not having reliable strategies for measuring and documenting progress (23%), and inconsistent program approaches to serving this population (17%).
The capacity to use assessment to inform instruction effectively may be influenced by training. However, fewer than half of all respondents have any relevant graduate training. Those who do are most likely to have general training in adult education (46%); only 17% were trained as reading specialists, 27% as elementary educators, and 18% as special educators. Unfortunately, professional development opportunities did not generally seem to address this training gap: nearly half of all respondents had less than five hours of PD within roughly the last five years, and only 2% had 16 hours or more. This suggests that even when assessments provide actionable data, many practitioners may not know how to make the best use of them.
Pre- and post-testing are common procedures in most programs regardless of size and funding source. Respondents reported that TABE is the most widely used assessment and that post-testing is most often conducted after 40 hours of instruction. However, TABE provides little specific data on reading component skills. Notably, only about one-third of instructors administer these assessments themselves, further separating assessment from instruction for many. Despite their distance from the process, about three-quarters of instructors report that assessment data give them a rough or good idea of where to begin instruction and whether progress is being made. The most common barrier to pre- and post-testing, reported by 56%, was that assessment is intimidating for learners. Less common barriers were that it takes too long to administer and does not provide actionable instructional information. Interestingly, 25% reported no barriers at all.
Most instructors reported doing some kind of formative assessment; 60% reported using home-grown strategies such as learning checks they have developed, observation, and asking learners to self-report. The single most common response was that they use assessments included in instructional materials. Only 10% reported doing no formative assessment. The single most common barrier to effective formative assessment was the lack of standardized, agreed-upon procedures in their programs. When combined, however, barriers related to time, know-how, and resources were more commonly reported.
Although the survey responses were relatively positive regarding pre- and post-testing and formative assessment procedures, and regarding the ways in which all kinds of assessment inform instruction, it is possible that a lack of training and professional development leaves practitioners with a limited sense of how assessment could better inform reading instruction. Focus group participants were more specific in identifying the challenges they have observed related to assessment. They reported that assessments are too long and intimidating and that learners find them stressful. Overall, they find the results unreliable, often failing to reflect progress accurately, and difficult to interpret for instructional purposes. Some said that what is most informative is simply to listen to learners read.
In conclusion, recommendations for next steps emerged specifically for assessment developers and policymakers.
For assessment developers, respondents suggested that new or revised assessment instruments should address the intimidation and frustration learners can experience in formal assessment situations and should improve reliability by considering the length, appearance, and relevance of assessment tasks. They also suggested that assessment results be actionable and clearly indicate specific strengths and areas for improvement, document information about learners' backgrounds and goals, and be linked to specific curriculum materials and instructional activities. In other words, assessment could better inform instruction if it were both more specific about skill development needs and broader in scope, taking in learner experiences and goals more holistically. Finally, they recommended that new assessments be simple to use, cost no more than current assessments, and be available in both digital and paper-and-pencil versions.
Additionally, respondents observed that learners in this population take longer to show measurable skill gains. Assessments should therefore be able to measure incremental improvement, and the accountability system should recognize small gains.