This is the first post in a two-part series exploring how education leaders can maximize student learning in an era of big data. In Part 1, we examine the most common hurdles that educators face when it comes to data and the different types of data they should consider using. Read Part 2 here.
One of Bobbi’s first actions when she took over as a new principal at a middle school was to create a data-driven school. Like many new principals, she had often heard and spoken about being data-driven as a rallying cry. Shortly after sharing her commitment to data with staff and the community, she formed a team to help her analyze the school’s data. That’s where the problems began. Her team was quickly drowning in test scores, software program usage data, demographic data, climate survey results, and input from stakeholders eager to help, who regularly brought in research and ideas for new best practices to explore.
The team realized they were being given more data to consider between meetings than they could use. At the end of Bobbi's first year, the school found itself having to answer questions about a lack of results and progress on various district-level initiatives. In her second year, the district selected a number of popular initiatives that seemed to have worked in other places. As in most districts, her administration was eager for her to implement a multitiered system of supports (MTSS) and explore implicit bias trainings to improve equity. Despite the team’s excitement, little changed over the course of the year, and Bobbi was quickly losing the trust of her staff and supervisors.
Over the past 20 years, K–12 education has made great strides on the path to becoming data-driven. However, many educators run into challenges similar to Bobbi’s—too much data and too little time. As a result, schools that consistently and effectively use data remain more the exception than the rule.
The disciplined and purposeful use of data can help schools define their vision, develop a plan to achieve it, and measure whether they are executing that plan. However, four primary hurdles limit the effectiveness of data-driven schools.
1. Data overload
Education is great at collecting data. However, much of it is never put to use. Demographic data is collected and stored; assessment results can take a year to arrive; student, staff, and parent surveys are administered but rarely used; data is stored but never compared, analyzed, or summarized in a usable form. Many educators will agree that the federal government and state and local agencies each seem to have their own data requirements, yet few educators understand what the data is for. This data overload does as much to muddy the picture of what needs to be done as it does to solve problems.
2. Initiative fatigue (or too many initiatives at once)
Research has shown the average lifespan of an education initiative is roughly 1.5 years. In conversations with principals, it’s not uncommon to hear of a dozen or more district-level initiatives being attempted at one time. This short lifecycle leads to inconsistent data measures over time and a constantly changing plan for achievement. If the plan is different every other year, it is difficult to pinpoint which interventions are supporting change and which are detracting from student learning.
This problem tends to be compounded in high-poverty or struggling schools. For some reason, when a school is struggling, it becomes a prime target for even more initiatives.
3. A lack of time
This third obstacle brings the challenges of data overload and initiative fatigue into focus. Time is likely the most precious resource in education. For some educators, limited time leads them to use just one or two pieces of data to make many decisions, instead of the right data for each individual decision.
4. A lack of confidence in analyzing data
Finally, I am an accountant by nature, and few things excite me more than pulling up a spreadsheet of discipline or special education data sets. This is not common in education. There is understanding data, and then there is interpreting data to inform decision-making—the “so what do I do differently?” based on the results. That can be hard because data from one source sometimes contradicts another. We intentionally use tests that measure slightly different things, so when we look at results for a student or a class in combination, knowing what to do first—and how—is the hard part.
To begin addressing these challenges, we must first organize the types of data available. In general, there are three: outcome data, predictive data, and implementation data. I’ll dive into each of these in my next blog post, but here’s a preview of what to expect.
The views expressed in this article are those of the author and do not necessarily represent those of HMH.