Early Grade Reading: What Does ‘Good’ Look Like and How Do We Measure It?
This blog post was originally published by Chemonics in August 2020.
What do you need to measure improvement in early grade literacy? Early grade reading benchmarks and a way to frequently assess every student’s performance. Learn how the USAID Soma Umenye project collaborated with the Rwanda Education Board to develop both.
Parents send their child to school and expect that she will learn to read. A teacher is expected to deliver quality instruction and provide appropriate support to students who are struggling. A head teacher is expected to turn an underperforming school into a high-performing one. An education official is expected to ensure strong education performance nationwide.
But what does good literacy performance look like for a Grade 1, 2, or 3 student? How good is “good enough,” and how do we measure it? Many countries face these questions, and Rwanda is no different.
In Rwanda, if you’re a parent, teacher, head teacher, or education official, you likely don’t have access to frequent assessment data that measures student progress against any sort of standard reading benchmark. To make matters worse, you may not know what the benchmarks are or even whether they exist. Where data does exist, it’s likely infrequent, sample-based literacy data (such as the Early Grade Reading Assessment, or “EGRA”). This means that while a nationally representative sample can give an overall EGRA picture, a teacher doesn’t know how her students are performing, a parent doesn’t know how their child is performing, and the data has no potential to inform improvement.
If you don’t know how to identify or measure “good,” then you’re unable to celebrate the success of high-performing students, identify and support at-risk students, improve schools, or ultimately hold the system accountable.
So, what do you need to measure improvement in early grade literacy? You need two things: early grade reading benchmarks that set out what should be achieved by the end of a grade and a way to frequently assess every student’s performance against those benchmarks. USAID Soma Umenye, implemented by Chemonics, has collaborated with the Rwanda Education Board (REB) to develop both.
In 2019, USAID Soma Umenye and REB co-developed early grade reading benchmarks for Kinyarwanda oral reading fluency and reading comprehension for Grades 1, 2, and 3. For the first time in Rwanda, teachers and schools have access to standards that students should achieve at the end of their first, second, and third years of formal education.
USAID Soma Umenye and REB then developed an assessment to measure literacy performance against the benchmarks: Rwanda’s Local Early Grade Reading Assessment (LEGRA). LEGRA enables all primary schools to assess all students on their reading proficiency twice a year (at the end of Terms 1 and 2). LEGRA consists of four literacy sub-tests that are administered over two days. Decoding and dictation tests are administered in a group setting, while fluency and reading comprehension tests are measured in a one-on-one setting.
Several traits make LEGRA different from other national-level literacy assessments. First, LEGRA is a class-based assessment that the teacher administers to every student. This is different from other literacy assessments, such as EGRA or the Group Administered Literacy Assessment, which outsiders administer to a sample of students and which are done to a school, not by the school.
Second, LEGRA is more than just an assessment. It is part of a larger structure that allows teachers, head teachers, parents, sector, and district officials to engage with every step of the process. It drives real-time decision-making for teachers, schools, and communities. There are four stages to the LEGRA process:
- The pre-assessment meeting: Teachers and head teachers in each school meet to discuss the upcoming LEGRA, reflect on the benchmarks, and predict the reading performance of their Grade 1, 2, and 3 students.
- The assessment and marking: Teachers administer and mark the test for every student.
- The post-assessment meeting: Teachers and head teachers meet to reflect on LEGRA results and develop plans for how to use that data to improve teaching and student reading performance.
- School, sector, and district inamas (“community meetings”): Each school re-introduces the benchmarks, shares LEGRA results with parents and the community, and presents its plan to improve reading performance.
Third, and perhaps most importantly, LEGRA puts assessment data into teachers’ hands immediately. It empowers them to develop and deliver an appropriate intervention. They can see exactly where their students are in their learning and trust that the data and the process will support their actions to meet learners where they are.
While researchers want to see comparative data about country performance for Sustainable Development Goal 4.1.1a, and ministers may be more concerned about their performance against the Human Capital Index (which may be positive if it drives change), most sample-based assessments that feed into international reporting don’t help the teacher, and they don’t help the teacher now. Through effective coaching and leadership, we are more likely to improve school success and nationwide reading performance if we:
- Improve teachers’ understanding of an achievable benchmark
- Build their confidence to assess learners’ performance against that benchmark
- Support them to teach differently to children who are struggling
- Help them taste success
Additionally, if we can improve parents’ knowledge of the benchmarks and provide them with frequent assessment data for their children, they are better positioned to demand higher-quality education and support their children’s literacy at home.
In short, it doesn’t have to be a perfect assessment; it just has to happen in every class nationwide. LEGRA isn’t about generating an overall score for Rwanda or enabling a cross-country comparison. It’s the sum of every individual score — measured at a time when remediation is still possible for the students assessed — and measured again to see if remediation has made an impact. It’s an assessment for learning, not of learning.
Last year, USAID Soma Umenye and REB piloted LEGRA in every Grade 1-3 classroom in five districts. During this upcoming school year, we will scale up to every Grade 1 through 3 classroom across the country. LEGRA is now part of REB’s comprehensive assessment framework. Data will be available on a dashboard at the sector, district, and national levels, allowing for wider accountability and diagnostics. This will enable stakeholders to reflect on struggling schools and sectors and provide targeted support. While we designed LEGRA for use in Rwanda, the process and tool can be adapted to other contexts. USAID Soma Umenye will share the process and findings on the Development Experience Clearinghouse.
Without a doubt, COVID-19 has affected education on a global scale. Many children are engaging in some degree of remote learning; however, it remains uncertain how this will affect learning and achievement levels as time passes. We don’t necessarily need to measure learning loss; we just need to teach children how to read. Through the use of assessment data, we can better understand whether children have achieved the fluency, reading comprehension, decoding, and dictation skills that underpin literacy development. Equipped with this data, we can then support teachers to pause, reflect, and deliver targeted remediation to students who aren’t meeting the benchmarks.
In Rwanda, the project is tackling the remediation part through Kinyarwanda Reading Camps. If teachers identify students as non-readers on LEGRA (those who need the most support to catch up and meet the benchmark), those students attend the camps during school holidays. The camps, run by Grade 1-3 Kinyarwanda teachers, provide targeted remedial instruction at both instructional and independent levels through inclusive approaches that are play-based, practical, and fun for students and teachers.
It may take more time to assess every student throughout the school year. But there are no shortcuts.
Collectively, we believe that LEGRA is a tool that can drive a realizable improvement in early grade reading — not just in Rwanda but in other contexts as well. Follow our national rollout in Rwanda with USAID Soma Umenye later in 2020 and into 2021 and stay tuned for future Chemonics blog posts that unpack how LEGRA supports decision-making and progress monitoring for Universal Design for Learning activities aligned with tier one and tier two of the Response to Intervention framework.
This blog represents the views of the author and does not necessarily represent the views of Chemonics. Kate Brolley is the deputy technical director for the USAID Soma Umenye Project. Previously, she was the inclusive education coordinator for Rwanda LEARN and a manager in Chemonics’ East and Southern Africa regional business unit.